Test Report: Docker_Linux_containerd_arm64 19326

35e58bd4f2346c2fce1feaa9162990386c1fdc2b:2024-07-25:35495

Tests failed (2/336)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 38    | TestAddons/serial/Volcano                               |       199.85 |
| 357   | TestStartStop/group/old-k8s-version/serial/SecondStart  |       377.86 |
|-------|---------------------------------------------------------|--------------|
TestAddons/serial/Volcano (199.85s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 47.563724ms
addons_test.go:897: volcano-scheduler stabilized in 47.626713ms
addons_test.go:913: volcano-controller stabilized in 47.654619ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-48f4d" [d8755cf1-43eb-45ce-a326-2a0383d1307d] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003473609s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-wrhg7" [503a2c32-1328-4e22-8157-54347631c8c2] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.00460471s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-hjrm4" [949e086a-8453-46f7-9462-7e9de2011492] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004082153s
addons_test.go:932: (dbg) Run:  kubectl --context addons-673848 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-673848 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-673848 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e724ff1d-8c9a-4be1-9440-ae0954353a55] Pending
helpers_test.go:344: "test-job-nginx-0" [e724ff1d-8c9a-4be1-9440-ae0954353a55] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-673848 -n addons-673848
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-07-25 18:37:03.595329212 +0000 UTC m=+449.173752475
addons_test.go:964: (dbg) Run:  kubectl --context addons-673848 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-673848 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-8696fd09-2b27-447c-965c-29678c08e7f4
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-97f9r (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-97f9r:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-673848 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-673848 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
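Note: the FailedScheduling event above ("0/1 nodes are unavailable: 1 Insufficient cpu.") indicates the single minikube node no longer had the 1 CPU requested by test-job-nginx-0 available once the control plane and the enabled addons were running. As a rough sketch of how this could be confirmed against the same profile (the profile and node name addons-673848 are taken from the logs above; the cluster may already be deleted by post-test cleanup):

kubectl --context addons-673848 describe node addons-673848
        (compare "Allocatable" cpu with the cpu requests under "Allocated resources")
kubectl --context addons-673848 get pod test-job-nginx-0 -n my-volcano -o jsonpath='{.spec.containers[0].resources}'
        (shows the pending pod's cpu request/limit of 1)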
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-673848
helpers_test.go:235: (dbg) docker inspect addons-673848:

-- stdout --
	[
	    {
	        "Id": "a014efe4008393ea6bf8bc5ab51f70b48c9578291ae0b9607c31bdece20af69a",
	        "Created": "2024-07-25T18:30:26.634187625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 438460,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-25T18:30:26.775917298Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/a014efe4008393ea6bf8bc5ab51f70b48c9578291ae0b9607c31bdece20af69a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a014efe4008393ea6bf8bc5ab51f70b48c9578291ae0b9607c31bdece20af69a/hostname",
	        "HostsPath": "/var/lib/docker/containers/a014efe4008393ea6bf8bc5ab51f70b48c9578291ae0b9607c31bdece20af69a/hosts",
	        "LogPath": "/var/lib/docker/containers/a014efe4008393ea6bf8bc5ab51f70b48c9578291ae0b9607c31bdece20af69a/a014efe4008393ea6bf8bc5ab51f70b48c9578291ae0b9607c31bdece20af69a-json.log",
	        "Name": "/addons-673848",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-673848:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-673848",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b24c523a0f662ae2b0d0edd435402ae653bc2c75e63f9f6435d0f2dc59155096-init/diff:/var/lib/docker/overlay2/2f35ea3391cd80b943121d1a194672f5d1b43fa71caefe855446e579999be65e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b24c523a0f662ae2b0d0edd435402ae653bc2c75e63f9f6435d0f2dc59155096/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b24c523a0f662ae2b0d0edd435402ae653bc2c75e63f9f6435d0f2dc59155096/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b24c523a0f662ae2b0d0edd435402ae653bc2c75e63f9f6435d0f2dc59155096/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-673848",
	                "Source": "/var/lib/docker/volumes/addons-673848/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-673848",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-673848",
	                "name.minikube.sigs.k8s.io": "addons-673848",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8999e427c99eb0a3d9c09d8e0e2f896cb9371f7f760b6ea0c2420ee1ff3f9a44",
	            "SandboxKey": "/var/run/docker/netns/8999e427c99e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33167"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-673848": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "75e6a4d5bed963fc252e5011be0222a8f2114eb4cb11ee0d490c47e09e14a026",
	                    "EndpointID": "f18e2283a38fc2d282ec94efdc7622061a925ac379a62304ec6a4e8f626d1ee7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-673848",
	                        "a014efe40083"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
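Note: in the inspect output above, the kic container is capped at NanoCpus 2000000000 (2 CPUs) and Memory 4194304000 bytes (4000 MiB), consistent with the --memory=4000 start flag recorded in the Audit table below. A minimal way to pull just those two fields instead of the full dump (sketch only; assumes the addons-673848 container still exists):

docker inspect -f '{{.HostConfig.NanoCpus}} {{.HostConfig.Memory}}' addons-673848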
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-673848 -n addons-673848
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-673848 logs -n 25: (1.568401459s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-389541   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | -p download-only-389541              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-389541              | download-only-389541   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| start   | -o=json --download-only              | download-only-362555   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | -p download-only-362555              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-362555              | download-only-362555   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| start   | -o=json --download-only              | download-only-301222   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | -p download-only-301222              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0  |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-301222              | download-only-301222   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-389541              | download-only-389541   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-362555              | download-only-362555   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-301222              | download-only-301222   | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| start   | --download-only -p                   | download-docker-414484 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | download-docker-414484               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-414484            | download-docker-414484 | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC | 25 Jul 24 18:30 UTC |
	| start   | --download-only -p                   | binary-mirror-233184   | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC |                     |
	|         | binary-mirror-233184                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39159               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-233184              | binary-mirror-233184   | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC | 25 Jul 24 18:30 UTC |
	| addons  | enable dashboard -p                  | addons-673848          | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC |                     |
	|         | addons-673848                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-673848          | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC |                     |
	|         | addons-673848                        |                        |         |         |                     |                     |
	| start   | -p addons-673848 --wait=true         | addons-673848          | jenkins | v1.33.1 | 25 Jul 24 18:30 UTC | 25 Jul 24 18:33 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:30:01
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:30:01.677542  437934 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:30:01.677765  437934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:30:01.677779  437934 out.go:304] Setting ErrFile to fd 2...
	I0725 18:30:01.677785  437934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:30:01.678129  437934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 18:30:01.678657  437934 out.go:298] Setting JSON to false
	I0725 18:30:01.679669  437934 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7950,"bootTime":1721924251,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 18:30:01.679750  437934 start.go:139] virtualization:  
	I0725 18:30:01.681949  437934 out.go:177] * [addons-673848] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0725 18:30:01.684153  437934 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:30:01.684269  437934 notify.go:220] Checking for updates...
	I0725 18:30:01.687923  437934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:30:01.689543  437934 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 18:30:01.691170  437934 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 18:30:01.692696  437934 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0725 18:30:01.694348  437934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:30:01.696287  437934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:30:01.721318  437934 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 18:30:01.721469  437934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:30:01.790007  437934 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-25 18:30:01.779609062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:30:01.790213  437934 docker.go:307] overlay module found
	I0725 18:30:01.792284  437934 out.go:177] * Using the docker driver based on user configuration
	I0725 18:30:01.793722  437934 start.go:297] selected driver: docker
	I0725 18:30:01.793751  437934 start.go:901] validating driver "docker" against <nil>
	I0725 18:30:01.793786  437934 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:30:01.794708  437934 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:30:01.856474  437934 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-25 18:30:01.845680786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:30:01.856647  437934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 18:30:01.856892  437934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:30:01.859309  437934 out.go:177] * Using Docker driver with root privileges
	I0725 18:30:01.861257  437934 cni.go:84] Creating CNI manager for ""
	I0725 18:30:01.861283  437934 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 18:30:01.861299  437934 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 18:30:01.861406  437934 start.go:340] cluster config:
	{Name:addons-673848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-673848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:30:01.864445  437934 out.go:177] * Starting "addons-673848" primary control-plane node in "addons-673848" cluster
	I0725 18:30:01.866751  437934 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0725 18:30:01.869190  437934 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0725 18:30:01.871374  437934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0725 18:30:01.871444  437934 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0725 18:30:01.871455  437934 cache.go:56] Caching tarball of preloaded images
	I0725 18:30:01.871557  437934 preload.go:172] Found /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 18:30:01.871567  437934 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
	I0725 18:30:01.871648  437934 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0725 18:30:01.871949  437934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/config.json ...
	I0725 18:30:01.871984  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/config.json: {Name:mk72c45d8044d4bd6178e01e023de0bd79cd2f94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:01.889915  437934 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0725 18:30:01.890098  437934 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0725 18:30:01.890118  437934 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0725 18:30:01.890123  437934 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0725 18:30:01.890131  437934 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0725 18:30:01.890137  437934 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0725 18:30:19.129564  437934 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0725 18:30:19.129608  437934 cache.go:194] Successfully downloaded all kic artifacts
	I0725 18:30:19.129665  437934 start.go:360] acquireMachinesLock for addons-673848: {Name:mk01756565827ff924d7a337cb722fc2afee9d74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 18:30:19.129784  437934 start.go:364] duration metric: took 94.39µs to acquireMachinesLock for "addons-673848"
	I0725 18:30:19.129817  437934 start.go:93] Provisioning new machine with config: &{Name:addons-673848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-673848 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0725 18:30:19.129904  437934 start.go:125] createHost starting for "" (driver="docker")
	I0725 18:30:19.132258  437934 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0725 18:30:19.132538  437934 start.go:159] libmachine.API.Create for "addons-673848" (driver="docker")
	I0725 18:30:19.132576  437934 client.go:168] LocalClient.Create starting
	I0725 18:30:19.132755  437934 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem
	I0725 18:30:19.481772  437934 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem
	I0725 18:30:20.043371  437934 cli_runner.go:164] Run: docker network inspect addons-673848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 18:30:20.060629  437934 cli_runner.go:211] docker network inspect addons-673848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 18:30:20.060730  437934 network_create.go:284] running [docker network inspect addons-673848] to gather additional debugging logs...
	I0725 18:30:20.060754  437934 cli_runner.go:164] Run: docker network inspect addons-673848
	W0725 18:30:20.077493  437934 cli_runner.go:211] docker network inspect addons-673848 returned with exit code 1
	I0725 18:30:20.077530  437934 network_create.go:287] error running [docker network inspect addons-673848]: docker network inspect addons-673848: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-673848 not found
	I0725 18:30:20.077545  437934 network_create.go:289] output of [docker network inspect addons-673848]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-673848 not found
	
	** /stderr **
	I0725 18:30:20.077662  437934 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 18:30:20.095078  437934 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001799650}
	I0725 18:30:20.095141  437934 network_create.go:124] attempt to create docker network addons-673848 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0725 18:30:20.095211  437934 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-673848 addons-673848
	I0725 18:30:20.170690  437934 network_create.go:108] docker network addons-673848 192.168.49.0/24 created
	I0725 18:30:20.170728  437934 kic.go:121] calculated static IP "192.168.49.2" for the "addons-673848" container
	I0725 18:30:20.170818  437934 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 18:30:20.186562  437934 cli_runner.go:164] Run: docker volume create addons-673848 --label name.minikube.sigs.k8s.io=addons-673848 --label created_by.minikube.sigs.k8s.io=true
	I0725 18:30:20.205077  437934 oci.go:103] Successfully created a docker volume addons-673848
	I0725 18:30:20.205192  437934 cli_runner.go:164] Run: docker run --rm --name addons-673848-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673848 --entrypoint /usr/bin/test -v addons-673848:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0725 18:30:22.285228  437934 cli_runner.go:217] Completed: docker run --rm --name addons-673848-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673848 --entrypoint /usr/bin/test -v addons-673848:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (2.079989133s)
	I0725 18:30:22.285259  437934 oci.go:107] Successfully prepared a docker volume addons-673848
	I0725 18:30:22.285284  437934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0725 18:30:22.285302  437934 kic.go:194] Starting extracting preloaded images to volume ...
	I0725 18:30:22.285391  437934 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-673848:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 18:30:26.557392  437934 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-673848:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.271959961s)
	I0725 18:30:26.557425  437934 kic.go:203] duration metric: took 4.272119466s to extract preloaded images to volume ...
	W0725 18:30:26.557576  437934 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0725 18:30:26.557698  437934 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 18:30:26.619825  437934 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-673848 --name addons-673848 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-673848 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-673848 --network addons-673848 --ip 192.168.49.2 --volume addons-673848:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0725 18:30:26.937181  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Running}}
	I0725 18:30:26.961401  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:30:26.990664  437934 cli_runner.go:164] Run: docker exec addons-673848 stat /var/lib/dpkg/alternatives/iptables
	I0725 18:30:27.060621  437934 oci.go:144] the created container "addons-673848" has a running status.
	I0725 18:30:27.060654  437934 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa...
	I0725 18:30:27.607515  437934 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 18:30:27.633763  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:30:27.655788  437934 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 18:30:27.655809  437934 kic_runner.go:114] Args: [docker exec --privileged addons-673848 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 18:30:27.724640  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:30:27.752283  437934 machine.go:94] provisionDockerMachine start ...
	I0725 18:30:27.752395  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:30:27.781288  437934 main.go:141] libmachine: Using SSH client type: native
	I0725 18:30:27.781558  437934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0725 18:30:27.781567  437934 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 18:30:27.922658  437934 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-673848
	
	I0725 18:30:27.922682  437934 ubuntu.go:169] provisioning hostname "addons-673848"
	I0725 18:30:27.922757  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:30:27.942760  437934 main.go:141] libmachine: Using SSH client type: native
	I0725 18:30:27.943065  437934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0725 18:30:27.943077  437934 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-673848 && echo "addons-673848" | sudo tee /etc/hostname
	I0725 18:30:28.093670  437934 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-673848
	
	I0725 18:30:28.093822  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:30:28.115359  437934 main.go:141] libmachine: Using SSH client type: native
	I0725 18:30:28.115603  437934 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33163 <nil> <nil>}
	I0725 18:30:28.115623  437934 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-673848' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-673848/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-673848' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 18:30:28.256646  437934 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 18:30:28.256677  437934 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19326-431487/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-431487/.minikube}
	I0725 18:30:28.256697  437934 ubuntu.go:177] setting up certificates
	I0725 18:30:28.256709  437934 provision.go:84] configureAuth start
	I0725 18:30:28.256773  437934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673848
	I0725 18:30:28.274274  437934 provision.go:143] copyHostCerts
	I0725 18:30:28.274359  437934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/ca.pem (1082 bytes)
	I0725 18:30:28.274478  437934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/cert.pem (1123 bytes)
	I0725 18:30:28.274531  437934 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/key.pem (1679 bytes)
	I0725 18:30:28.274578  437934 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem org=jenkins.addons-673848 san=[127.0.0.1 192.168.49.2 addons-673848 localhost minikube]
	I0725 18:30:28.865809  437934 provision.go:177] copyRemoteCerts
	I0725 18:30:28.865880  437934 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 18:30:28.865922  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:30:28.885443  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:30:28.984191  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 18:30:29.011196  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0725 18:30:29.036409  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 18:30:29.061181  437934 provision.go:87] duration metric: took 804.458203ms to configureAuth
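configureAuth above generates a TLS server certificate whose SANs are exactly the names in the provision.go line (127.0.0.1, 192.168.49.2, addons-673848, localhost, minikube). A compressed sketch of that step with the standard library is shown below; it is self-signed for brevity, whereas minikube signs with the CA whose files it just copied.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a key and a server certificate template carrying the SANs from the log.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-673848"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-673848", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed for brevity; minikube passes its CA cert/key as the parent instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```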
	I0725 18:30:29.061208  437934 ubuntu.go:193] setting minikube options for container-runtime
	I0725 18:30:29.061393  437934 config.go:182] Loaded profile config "addons-673848": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 18:30:29.061402  437934 machine.go:97] duration metric: took 1.309100915s to provisionDockerMachine
	I0725 18:30:29.061410  437934 client.go:171] duration metric: took 9.928824511s to LocalClient.Create
	I0725 18:30:29.061429  437934 start.go:167] duration metric: took 9.928892933s to libmachine.API.Create "addons-673848"
	I0725 18:30:29.061437  437934 start.go:293] postStartSetup for "addons-673848" (driver="docker")
	I0725 18:30:29.061447  437934 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 18:30:29.061500  437934 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 18:30:29.061554  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:30:29.077881  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:30:29.172325  437934 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 18:30:29.175677  437934 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 18:30:29.175714  437934 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 18:30:29.175726  437934 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 18:30:29.175733  437934 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0725 18:30:29.175744  437934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-431487/.minikube/addons for local assets ...
	I0725 18:30:29.175812  437934 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-431487/.minikube/files for local assets ...
	I0725 18:30:29.175843  437934 start.go:296] duration metric: took 114.399505ms for postStartSetup
	I0725 18:30:29.176170  437934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673848
	I0725 18:30:29.194024  437934 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/config.json ...
	I0725 18:30:29.194328  437934 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:30:29.194388  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:30:29.211255  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:30:29.303884  437934 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 18:30:29.308502  437934 start.go:128] duration metric: took 10.178579908s to createHost
	I0725 18:30:29.308530  437934 start.go:83] releasing machines lock for "addons-673848", held for 10.178732004s
	I0725 18:30:29.308604  437934 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-673848
	I0725 18:30:29.324765  437934 ssh_runner.go:195] Run: cat /version.json
	I0725 18:30:29.324798  437934 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 18:30:29.324823  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:30:29.324855  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:30:29.348574  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:30:29.360940  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:30:29.595334  437934 ssh_runner.go:195] Run: systemctl --version
	I0725 18:30:29.600082  437934 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 18:30:29.604466  437934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0725 18:30:29.631172  437934 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0725 18:30:29.631265  437934 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;

	I0725 18:30:29.661288  437934 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0725 18:30:29.661319  437934 start.go:495] detecting cgroup driver to use...
	I0725 18:30:29.661378  437934 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0725 18:30:29.661458  437934 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 18:30:29.674182  437934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 18:30:29.686482  437934 docker.go:217] disabling cri-docker service (if available) ...
	I0725 18:30:29.686571  437934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 18:30:29.701906  437934 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 18:30:29.717171  437934 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 18:30:29.817512  437934 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 18:30:29.920003  437934 docker.go:233] disabling docker service ...
	I0725 18:30:29.920119  437934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 18:30:29.942633  437934 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 18:30:29.955113  437934 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 18:30:30.108554  437934 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 18:30:30.218600  437934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 18:30:30.231662  437934 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 18:30:30.250738  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 18:30:30.262736  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 18:30:30.273909  437934 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 18:30:30.274035  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 18:30:30.285507  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 18:30:30.296606  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 18:30:30.307679  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 18:30:30.318447  437934 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 18:30:30.328241  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 18:30:30.338642  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 18:30:30.349013  437934 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 18:30:30.359543  437934 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 18:30:30.369114  437934 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 18:30:30.378325  437934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:30:30.460879  437934 ssh_runner.go:195] Run: sudo systemctl restart containerd
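The sed calls above are plain line rewrites of /etc/containerd/config.toml; the SystemdCgroup toggle, for example (forced to false because the host reported the "cgroupfs" driver), amounts to something like the following. The path and key come from the log; the helper itself is only illustrative.

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites any "SystemdCgroup = ..." line in the containerd
// config, preserving indentation, much like the sed call in the log.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAllString(string(data), fmt.Sprintf("${1}SystemdCgroup = %t", enabled))
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	// cgroupfs driver detected on the host, so SystemdCgroup is forced to false.
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```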
	I0725 18:30:30.590849  437934 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0725 18:30:30.590954  437934 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0725 18:30:30.594754  437934 start.go:563] Will wait 60s for crictl version
	I0725 18:30:30.594873  437934 ssh_runner.go:195] Run: which crictl
	I0725 18:30:30.598712  437934 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 18:30:30.639255  437934 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0725 18:30:30.639379  437934 ssh_runner.go:195] Run: containerd --version
	I0725 18:30:30.662656  437934 ssh_runner.go:195] Run: containerd --version
	I0725 18:30:30.695521  437934 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
	I0725 18:30:30.697600  437934 cli_runner.go:164] Run: docker network inspect addons-673848 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 18:30:30.713729  437934 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0725 18:30:30.717510  437934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:30:30.728545  437934 kubeadm.go:883] updating cluster {Name:addons-673848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-673848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 18:30:30.728675  437934 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0725 18:30:30.728740  437934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:30:30.768204  437934 containerd.go:627] all images are preloaded for containerd runtime.
	I0725 18:30:30.768232  437934 containerd.go:534] Images already preloaded, skipping extraction
	I0725 18:30:30.768298  437934 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 18:30:30.818142  437934 containerd.go:627] all images are preloaded for containerd runtime.
	I0725 18:30:30.818166  437934 cache_images.go:84] Images are preloaded, skipping loading
	I0725 18:30:30.818176  437934 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 containerd true true} ...
	I0725 18:30:30.818278  437934 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-673848 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-673848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 18:30:30.818346  437934 ssh_runner.go:195] Run: sudo crictl info
	I0725 18:30:30.857087  437934 cni.go:84] Creating CNI manager for ""
	I0725 18:30:30.857114  437934 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 18:30:30.857123  437934 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 18:30:30.857146  437934 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-673848 NodeName:addons-673848 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 18:30:30.857283  437934 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-673848"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 18:30:30.857357  437934 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 18:30:30.866460  437934 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 18:30:30.866548  437934 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 18:30:30.875209  437934 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0725 18:30:30.893488  437934 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 18:30:30.912138  437934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
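The kubeadm.yaml just copied to the node is rendered from per-node values (name, IP, API server port). A toy text/template rendering of only the InitConfiguration fragment, with the values taken from the log; the template text is illustrative, not minikube's actual generator.

```go
package main

import (
	"os"
	"text/template"
)

// A toy stand-in for the config generator: render the InitConfiguration
// fragment from per-node values.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
  taints: []
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	_ = tmpl.Execute(os.Stdout, map[string]any{
		"NodeName":      "addons-673848",
		"NodeIP":        "192.168.49.2",
		"APIServerPort": 8443,
	})
}
```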
	I0725 18:30:30.930853  437934 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0725 18:30:30.934289  437934 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 18:30:30.945265  437934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:30:31.026036  437934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:30:31.043884  437934 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848 for IP: 192.168.49.2
	I0725 18:30:31.043909  437934 certs.go:194] generating shared ca certs ...
	I0725 18:30:31.043925  437934 certs.go:226] acquiring lock for ca certs: {Name:mk41d7b1e7cb52699a093c81e00768f54d73ad8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:31.044053  437934 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-431487/.minikube/ca.key
	I0725 18:30:32.437748  437934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt ...
	I0725 18:30:32.437787  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt: {Name:mk6927da53a5c109595c2a84a1ddaeb2c3ffb6f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:32.437999  437934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-431487/.minikube/ca.key ...
	I0725 18:30:32.438013  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/ca.key: {Name:mk07dd5b8e1da5b7769d703e6029f3f953c08816 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:32.438122  437934 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.key
	I0725 18:30:32.727960  437934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.crt ...
	I0725 18:30:32.727988  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.crt: {Name:mk310c0cfd432272a3dfd86f61d00328d67a1cc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:32.728177  437934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.key ...
	I0725 18:30:32.728190  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.key: {Name:mk2c8aff8edf86c2b617eb25b263d484f068c43f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:32.728277  437934 certs.go:256] generating profile certs ...
	I0725 18:30:32.728341  437934 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.key
	I0725 18:30:32.728358  437934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt with IP's: []
	I0725 18:30:33.114886  437934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt ...
	I0725 18:30:33.114920  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: {Name:mk9f860d41a6e913ae2dfa2c0c6719c6f967db0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:33.115132  437934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.key ...
	I0725 18:30:33.115148  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.key: {Name:mka12a2a46c9546a3b971746c9dfc460fc062f0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:33.115227  437934 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.key.01926569
	I0725 18:30:33.115251  437934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.crt.01926569 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0725 18:30:33.747091  437934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.crt.01926569 ...
	I0725 18:30:33.747124  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.crt.01926569: {Name:mk8877d57b521629bc3160a99ddc78023e6b431f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:33.747312  437934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.key.01926569 ...
	I0725 18:30:33.747329  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.key.01926569: {Name:mk8753cb335e5153bbfd30344aba4ea3e6b9b8ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:33.747413  437934 certs.go:381] copying /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.crt.01926569 -> /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.crt
	I0725 18:30:33.747493  437934 certs.go:385] copying /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.key.01926569 -> /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.key
	I0725 18:30:33.747548  437934 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/proxy-client.key
	I0725 18:30:33.747570  437934 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/proxy-client.crt with IP's: []
	I0725 18:30:33.881002  437934 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/proxy-client.crt ...
	I0725 18:30:33.881033  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/proxy-client.crt: {Name:mkf80fec04f043b67635900a23dbd3aece2eaa5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:33.881215  437934 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/proxy-client.key ...
	I0725 18:30:33.881230  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/proxy-client.key: {Name:mkae2d448eaef8f0b0d3b152df1372cab4e6382a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:30:33.881425  437934 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem (1675 bytes)
	I0725 18:30:33.881470  437934 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem (1082 bytes)
	I0725 18:30:33.881502  437934 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem (1123 bytes)
	I0725 18:30:33.881532  437934 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem (1679 bytes)
	I0725 18:30:33.882171  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 18:30:33.907758  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 18:30:33.932960  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 18:30:33.957614  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 18:30:33.982710  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0725 18:30:34.016147  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 18:30:34.044192  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 18:30:34.072623  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 18:30:34.098788  437934 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 18:30:34.128621  437934 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 18:30:34.149671  437934 ssh_runner.go:195] Run: openssl version
	I0725 18:30:34.156111  437934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 18:30:34.167146  437934 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:30:34.171549  437934 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:30:34.171666  437934 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 18:30:34.182171  437934 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 18:30:34.192278  437934 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 18:30:34.195897  437934 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 18:30:34.195953  437934 kubeadm.go:392] StartCluster: {Name:addons-673848 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-673848 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:30:34.196033  437934 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0725 18:30:34.196090  437934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 18:30:34.234720  437934 cri.go:89] found id: ""
	I0725 18:30:34.234799  437934 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 18:30:34.244598  437934 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 18:30:34.254497  437934 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0725 18:30:34.254565  437934 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 18:30:34.264323  437934 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 18:30:34.264341  437934 kubeadm.go:157] found existing configuration files:
	
	I0725 18:30:34.264403  437934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 18:30:34.273418  437934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 18:30:34.273502  437934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 18:30:34.282130  437934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 18:30:34.291361  437934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 18:30:34.291426  437934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 18:30:34.300181  437934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 18:30:34.309235  437934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 18:30:34.309303  437934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 18:30:34.317900  437934 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 18:30:34.326858  437934 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 18:30:34.326928  437934 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 18:30:34.335755  437934 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 18:30:34.382178  437934 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 18:30:34.382504  437934 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 18:30:34.422604  437934 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0725 18:30:34.422675  437934 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
	I0725 18:30:34.422714  437934 kubeadm.go:310] OS: Linux
	I0725 18:30:34.422761  437934 kubeadm.go:310] CGROUPS_CPU: enabled
	I0725 18:30:34.422813  437934 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0725 18:30:34.422862  437934 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0725 18:30:34.422912  437934 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0725 18:30:34.422982  437934 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0725 18:30:34.423035  437934 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0725 18:30:34.423082  437934 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0725 18:30:34.423133  437934 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0725 18:30:34.423184  437934 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0725 18:30:34.494592  437934 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 18:30:34.494707  437934 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 18:30:34.495395  437934 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 18:30:34.736795  437934 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 18:30:34.741643  437934 out.go:204]   - Generating certificates and keys ...
	I0725 18:30:34.741749  437934 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 18:30:34.741818  437934 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 18:30:35.106091  437934 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 18:30:35.494948  437934 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 18:30:36.241831  437934 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 18:30:36.420196  437934 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 18:30:36.704755  437934 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 18:30:36.705037  437934 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-673848 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0725 18:30:36.930081  437934 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 18:30:36.930574  437934 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-673848 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0725 18:30:37.169789  437934 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 18:30:38.434731  437934 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 18:30:38.729757  437934 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 18:30:38.730010  437934 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 18:30:39.392100  437934 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 18:30:39.758152  437934 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 18:30:40.134806  437934 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 18:30:40.399369  437934 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 18:30:41.619333  437934 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 18:30:41.619437  437934 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 18:30:41.622516  437934 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 18:30:41.625432  437934 out.go:204]   - Booting up control plane ...
	I0725 18:30:41.625550  437934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 18:30:41.625650  437934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 18:30:41.629156  437934 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 18:30:41.646319  437934 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 18:30:41.648304  437934 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 18:30:41.648363  437934 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 18:30:41.743440  437934 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 18:30:41.743531  437934 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 18:30:44.239893  437934 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.501371289s
	I0725 18:30:44.240001  437934 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 18:30:50.241807  437934 kubeadm.go:310] [api-check] The API server is healthy after 6.001864423s
	I0725 18:30:50.262177  437934 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 18:30:50.277971  437934 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 18:30:50.305529  437934 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 18:30:50.305721  437934 kubeadm.go:310] [mark-control-plane] Marking the node addons-673848 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 18:30:50.317038  437934 kubeadm.go:310] [bootstrap-token] Using token: 0fst7n.ys3qfw1db6d257zm
	I0725 18:30:50.318894  437934 out.go:204]   - Configuring RBAC rules ...
	I0725 18:30:50.319051  437934 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 18:30:50.326321  437934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 18:30:50.334457  437934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 18:30:50.338722  437934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 18:30:50.345654  437934 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 18:30:50.350745  437934 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 18:30:50.649443  437934 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 18:30:51.098641  437934 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 18:30:51.648487  437934 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 18:30:51.649634  437934 kubeadm.go:310] 
	I0725 18:30:51.649730  437934 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 18:30:51.649743  437934 kubeadm.go:310] 
	I0725 18:30:51.649824  437934 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 18:30:51.649829  437934 kubeadm.go:310] 
	I0725 18:30:51.649854  437934 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 18:30:51.649911  437934 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 18:30:51.649968  437934 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 18:30:51.649974  437934 kubeadm.go:310] 
	I0725 18:30:51.650025  437934 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 18:30:51.650030  437934 kubeadm.go:310] 
	I0725 18:30:51.650081  437934 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 18:30:51.650086  437934 kubeadm.go:310] 
	I0725 18:30:51.650136  437934 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 18:30:51.650207  437934 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 18:30:51.650273  437934 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 18:30:51.650281  437934 kubeadm.go:310] 
	I0725 18:30:51.650362  437934 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 18:30:51.650435  437934 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 18:30:51.650440  437934 kubeadm.go:310] 
	I0725 18:30:51.650519  437934 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 0fst7n.ys3qfw1db6d257zm \
	I0725 18:30:51.650620  437934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1d9d423c7b2bb24c3d3f38c79b211d8531f40310f79a51ed9602d13a57b81b8c \
	I0725 18:30:51.650640  437934 kubeadm.go:310] 	--control-plane 
	I0725 18:30:51.650644  437934 kubeadm.go:310] 
	I0725 18:30:51.650729  437934 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 18:30:51.650734  437934 kubeadm.go:310] 
	I0725 18:30:51.650812  437934 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 0fst7n.ys3qfw1db6d257zm \
	I0725 18:30:51.650910  437934 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1d9d423c7b2bb24c3d3f38c79b211d8531f40310f79a51ed9602d13a57b81b8c 
	I0725 18:30:51.653128  437934 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1065-aws\n", err: exit status 1
	I0725 18:30:51.653248  437934 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
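The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo, which lets a joining node pin the CA without trusting the network. A small sketch that recomputes it from the ca.crt copied earlier in this log (the path comes from the scp line above; reading it requires root on the node).

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the scp line earlier in the log; adjust for other setups.
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data in ca.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm pins the SHA-256 of the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```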
	I0725 18:30:51.653267  437934 cni.go:84] Creating CNI manager for ""
	I0725 18:30:51.653275  437934 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 18:30:51.655468  437934 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0725 18:30:51.657522  437934 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0725 18:30:51.661372  437934 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0725 18:30:51.661391  437934 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0725 18:30:51.681111  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0725 18:30:51.951428  437934 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 18:30:51.951640  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-673848 minikube.k8s.io/updated_at=2024_07_25T18_30_51_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=addons-673848 minikube.k8s.io/primary=true
	I0725 18:30:51.951572  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:52.136933  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:52.137032  437934 ops.go:34] apiserver oom_adj: -16
	I0725 18:30:52.637993  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:53.137552  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:53.637654  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:54.137254  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:54.637919  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:55.137092  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:55.637648  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:56.137717  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:56.637822  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:57.137960  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:57.637089  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:58.137657  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:58.637137  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:59.137417  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:30:59.637374  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:00.154718  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:00.637089  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:01.137776  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:01.637296  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:02.137644  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:02.637674  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:03.137815  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:03.637082  437934 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 18:31:03.734678  437934 kubeadm.go:1113] duration metric: took 11.78316384s to wait for elevateKubeSystemPrivileges
	I0725 18:31:03.734711  437934 kubeadm.go:394] duration metric: took 29.538760915s to StartCluster
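The run of identical `kubectl get sa default` calls above is a readiness poll: the default ServiceAccount only appears once the controller manager is up, so minikube retries until it exists (here, for roughly 11.8s). A plain polling sketch of the same idea using os/exec and a context timeout; the interval and timeout values are assumptions.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForDefaultSA retries "kubectl get sa default" until it succeeds or the
// context expires, mirroring the polling loop in the log.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "--kubeconfig", kubeconfig,
			"-n", "default", "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

func main() {
	// Timeout and interval are assumptions; the log shows the wait took ~11.8s.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
		fmt.Fprintln(os.Stderr, "default service account never appeared:", err)
		os.Exit(1)
	}
}
```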
	I0725 18:31:03.734735  437934 settings.go:142] acquiring lock: {Name:mk69edff96840eebb76289a50cf78daf601fe5de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:31:03.734864  437934 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 18:31:03.735318  437934 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/kubeconfig: {Name:mk3cdfe1101bbbc0f7441d92cff5cd6b29ee3404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 18:31:03.735569  437934 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0725 18:31:03.735726  437934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 18:31:03.736015  437934 config.go:182] Loaded profile config "addons-673848": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 18:31:03.736055  437934 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0725 18:31:03.736151  437934 addons.go:69] Setting yakd=true in profile "addons-673848"
	I0725 18:31:03.736180  437934 addons.go:234] Setting addon yakd=true in "addons-673848"
	I0725 18:31:03.736208  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.736683  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.737298  437934 addons.go:69] Setting metrics-server=true in profile "addons-673848"
	I0725 18:31:03.737325  437934 addons.go:234] Setting addon metrics-server=true in "addons-673848"
	I0725 18:31:03.737353  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.737744  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.737870  437934 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-673848"
	I0725 18:31:03.737890  437934 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-673848"
	I0725 18:31:03.737911  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.738275  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.740610  437934 addons.go:69] Setting registry=true in profile "addons-673848"
	I0725 18:31:03.740648  437934 addons.go:234] Setting addon registry=true in "addons-673848"
	I0725 18:31:03.740696  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.741125  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.742045  437934 out.go:177] * Verifying Kubernetes components...
	I0725 18:31:03.742289  437934 addons.go:69] Setting cloud-spanner=true in profile "addons-673848"
	I0725 18:31:03.742321  437934 addons.go:234] Setting addon cloud-spanner=true in "addons-673848"
	I0725 18:31:03.742361  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.742776  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.745119  437934 addons.go:69] Setting storage-provisioner=true in profile "addons-673848"
	I0725 18:31:03.745225  437934 addons.go:234] Setting addon storage-provisioner=true in "addons-673848"
	I0725 18:31:03.745268  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.747160  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.750537  437934 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-673848"
	I0725 18:31:03.750655  437934 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-673848"
	I0725 18:31:03.750716  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.751248  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.755622  437934 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-673848"
	I0725 18:31:03.755738  437934 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-673848"
	I0725 18:31:03.757698  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.765626  437934 addons.go:69] Setting default-storageclass=true in profile "addons-673848"
	I0725 18:31:03.765732  437934 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-673848"
	I0725 18:31:03.766177  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.775431  437934 addons.go:69] Setting volcano=true in profile "addons-673848"
	I0725 18:31:03.775484  437934 addons.go:234] Setting addon volcano=true in "addons-673848"
	I0725 18:31:03.775533  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.775999  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.778902  437934 addons.go:69] Setting gcp-auth=true in profile "addons-673848"
	I0725 18:31:03.779012  437934 mustload.go:65] Loading cluster: addons-673848
	I0725 18:31:03.779219  437934 config.go:182] Loaded profile config "addons-673848": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 18:31:03.779545  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.799051  437934 addons.go:69] Setting volumesnapshots=true in profile "addons-673848"
	I0725 18:31:03.799108  437934 addons.go:234] Setting addon volumesnapshots=true in "addons-673848"
	I0725 18:31:03.799145  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.799353  437934 addons.go:69] Setting ingress-dns=true in profile "addons-673848"
	I0725 18:31:03.799402  437934 addons.go:234] Setting addon ingress-dns=true in "addons-673848"
	I0725 18:31:03.799462  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.799614  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.799947  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.799051  437934 addons.go:69] Setting ingress=true in profile "addons-673848"
	I0725 18:31:03.818341  437934 addons.go:234] Setting addon ingress=true in "addons-673848"
	I0725 18:31:03.818396  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.818820  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.819060  437934 addons.go:69] Setting inspektor-gadget=true in profile "addons-673848"
	I0725 18:31:03.819092  437934 addons.go:234] Setting addon inspektor-gadget=true in "addons-673848"
	I0725 18:31:03.819125  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.819493  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.845989  437934 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 18:31:03.973761  437934 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 18:31:03.979313  437934 out.go:177]   - Using image docker.io/registry:2.8.3
	I0725 18:31:03.980273  437934 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0725 18:31:03.970095  437934 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-673848"
	I0725 18:31:03.984516  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:03.985569  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:03.986140  437934 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0725 18:31:03.986306  437934 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0725 18:31:03.989335  437934 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0725 18:31:03.989426  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0725 18:31:03.989619  437934 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0725 18:31:03.989639  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0725 18:31:03.989718  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:03.989755  437934 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 18:31:03.989805  437934 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 18:31:03.989943  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.011318  437934 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0725 18:31:04.011400  437934 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0725 18:31:04.011512  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.015201  437934 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0725 18:31:04.019101  437934 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0725 18:31:04.019178  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0725 18:31:04.019287  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.027449  437934 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0725 18:31:04.027513  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0725 18:31:04.027610  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.042991  437934 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 18:31:04.044974  437934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:31:04.044998  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 18:31:04.045078  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.061619  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:04.066730  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0725 18:31:04.066966  437934 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0725 18:31:04.067074  437934 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0725 18:31:04.068203  437934 addons.go:234] Setting addon default-storageclass=true in "addons-673848"
	I0725 18:31:04.068298  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:04.068833  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:04.079272  437934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0725 18:31:04.079300  437934 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0725 18:31:04.079371  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.067134  437934 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0725 18:31:04.089548  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0725 18:31:04.089691  437934 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0725 18:31:04.089707  437934 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0725 18:31:04.089789  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.103429  437934 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0725 18:31:04.103449  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0725 18:31:04.103514  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.117900  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0725 18:31:04.121880  437934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 18:31:04.123768  437934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 18:31:04.126615  437934 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0725 18:31:04.128690  437934 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0725 18:31:04.128746  437934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0725 18:31:04.130820  437934 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0725 18:31:04.130855  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0725 18:31:04.130950  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.156117  437934 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0725 18:31:04.156146  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0725 18:31:04.156232  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.189876  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0725 18:31:04.193432  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0725 18:31:04.200488  437934 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0725 18:31:04.221115  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0725 18:31:04.226690  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0725 18:31:04.228766  437934 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0725 18:31:04.231131  437934 out.go:177]   - Using image docker.io/busybox:stable
	I0725 18:31:04.233547  437934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0725 18:31:04.233678  437934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0725 18:31:04.233764  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.239127  437934 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0725 18:31:04.239238  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0725 18:31:04.239359  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.254099  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.259392  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.265114  437934 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 18:31:04.266127  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.291739  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.302107  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.304634  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.310814  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.318065  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.325918  437934 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 18:31:04.325940  437934 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 18:31:04.326003  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:04.337599  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.342782  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.391229  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.403017  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:04.407321  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	W0725 18:31:04.417198  437934 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0725 18:31:04.417229  437934 retry.go:31] will retry after 288.762877ms: ssh: handshake failed: EOF
	I0725 18:31:04.419430  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	W0725 18:31:04.420436  437934 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0725 18:31:04.420459  437934 retry.go:31] will retry after 367.770064ms: ssh: handshake failed: EOF
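
The sshutil.go:64 / retry.go:31 pairs above show the dial logic tolerating transient SSH handshake failures: the connection attempt is simply retried after a short, slightly longer delay instead of failing the whole addon setup. A minimal sketch of that retry-with-backoff pattern in Go (illustrative only, not minikube's actual implementation; dial is a stand-in for the real SSH connection attempt):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // dial stands in for the real SSH connection attempt (hypothetical helper).
    func dial() error { return errors.New("ssh: handshake failed: EOF") }

    // dialWithRetry retries dial up to attempts times, waiting a little longer
    // after each failure, and returns the last error if every attempt fails.
    func dialWithRetry(attempts int, base time.Duration) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = dial(); err == nil {
    			return nil
    		}
    		wait := base * time.Duration(i+1) // grow the wait on each attempt
    		fmt.Printf("will retry after %v: %v\n", wait, err)
    		time.Sleep(wait)
    	}
    	return err
    }

    func main() {
    	_ = dialWithRetry(3, 300*time.Millisecond)
    }
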
	I0725 18:31:04.970270  437934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 18:31:04.970343  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0725 18:31:05.053997  437934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0725 18:31:05.054071  437934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0725 18:31:05.061347  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 18:31:05.123583  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0725 18:31:05.148396  437934 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0725 18:31:05.148426  437934 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0725 18:31:05.173639  437934 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0725 18:31:05.173718  437934 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0725 18:31:05.203470  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0725 18:31:05.354532  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0725 18:31:05.380160  437934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0725 18:31:05.380234  437934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0725 18:31:05.404960  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0725 18:31:05.479382  437934 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0725 18:31:05.479460  437934 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0725 18:31:05.486032  437934 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0725 18:31:05.486059  437934 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0725 18:31:05.489500  437934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 18:31:05.489524  437934 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 18:31:05.506324  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0725 18:31:05.529715  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0725 18:31:05.637636  437934 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0725 18:31:05.637711  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0725 18:31:05.645360  437934 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0725 18:31:05.645433  437934 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0725 18:31:05.678964  437934 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0725 18:31:05.679037  437934 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0725 18:31:05.705304  437934 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0725 18:31:05.705381  437934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0725 18:31:05.752895  437934 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0725 18:31:05.752970  437934 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0725 18:31:05.788310  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 18:31:05.796386  437934 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:31:05.796467  437934 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 18:31:05.845116  437934 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0725 18:31:05.845195  437934 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0725 18:31:05.927181  437934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0725 18:31:05.927209  437934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0725 18:31:05.943337  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0725 18:31:05.995721  437934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0725 18:31:05.995749  437934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0725 18:31:06.060617  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 18:31:06.065498  437934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0725 18:31:06.065528  437934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0725 18:31:06.081884  437934 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.108073273s)
	I0725 18:31:06.081918  437934 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
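
The 2.1s completion above is the CoreDNS rewrite launched at 18:31:03.973761: minikube dumps the coredns ConfigMap, uses sed to splice a hosts block (plus a log directive) in front of the "forward . /etc/resolv.conf" line, and replaces the ConfigMap, so that host.minikube.internal resolves to the host-side gateway 192.168.49.1 from inside the cluster. After the edit the relevant fragment of the Corefile looks roughly like this (abridged sketch; the surrounding directives depend on the stock CoreDNS config shipped with Kubernetes v1.30.3):

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
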
	I0725 18:31:06.083159  437934 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.818015772s)
	I0725 18:31:06.083987  437934 node_ready.go:35] waiting up to 6m0s for node "addons-673848" to be "Ready" ...
	I0725 18:31:06.089210  437934 node_ready.go:49] node "addons-673848" has status "Ready":"True"
	I0725 18:31:06.089241  437934 node_ready.go:38] duration metric: took 5.226094ms for node "addons-673848" to be "Ready" ...
	I0725 18:31:06.089252  437934 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:31:06.108264  437934 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 18:31:06.108299  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0725 18:31:06.122618  437934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dc7b5" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:06.140535  437934 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0725 18:31:06.140562  437934 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0725 18:31:06.370792  437934 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0725 18:31:06.370864  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0725 18:31:06.515145  437934 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0725 18:31:06.515173  437934 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0725 18:31:06.521975  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 18:31:06.585746  437934 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-673848" context rescaled to 1 replicas
	I0725 18:31:06.702265  437934 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0725 18:31:06.702297  437934 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0725 18:31:06.789612  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0725 18:31:06.918009  437934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0725 18:31:06.918036  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0725 18:31:07.133540  437934 pod_ready.go:97] pod "coredns-7db6d8ff4d-dc7b5" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-07-25 18:31:04 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0x4000861eba AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0725 18:31:07.133622  437934 pod_ready.go:81] duration metric: took 1.010925381s for pod "coredns-7db6d8ff4d-dc7b5" in "kube-system" namespace to be "Ready" ...
	E0725 18:31:07.133650  437934 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-dc7b5" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-25 18:31:04 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-07-25 18:31:04 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0x4000861eba AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0725 18:31:07.133685  437934 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:07.189297  437934 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0725 18:31:07.189325  437934 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0725 18:31:07.411075  437934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0725 18:31:07.411159  437934 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0725 18:31:07.572733  437934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0725 18:31:07.572806  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0725 18:31:07.662538  437934 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0725 18:31:07.662559  437934 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0725 18:31:07.919407  437934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0725 18:31:07.919476  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0725 18:31:08.058039  437934 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0725 18:31:08.058132  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0725 18:31:08.388065  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0725 18:31:08.473462  437934 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0725 18:31:08.473542  437934 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0725 18:31:08.772558  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0725 18:31:08.997197  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.935753253s)
	I0725 18:31:09.139794  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:11.179460  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:11.285976  437934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0725 18:31:11.286140  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:11.318758  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:11.981597  437934 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0725 18:31:12.122784  437934 addons.go:234] Setting addon gcp-auth=true in "addons-673848"
	I0725 18:31:12.122892  437934 host.go:66] Checking if "addons-673848" exists ...
	I0725 18:31:12.123421  437934 cli_runner.go:164] Run: docker container inspect addons-673848 --format={{.State.Status}}
	I0725 18:31:12.155112  437934 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0725 18:31:12.155172  437934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-673848
	I0725 18:31:12.187155  437934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/addons-673848/id_rsa Username:docker}
	I0725 18:31:13.075200  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.951534015s)
	I0725 18:31:13.075380  437934 addons.go:475] Verifying addon ingress=true in "addons-673848"
	I0725 18:31:13.075311  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.871757042s)
	I0725 18:31:13.079852  437934 out.go:177] * Verifying ingress addon...
	I0725 18:31:13.084382  437934 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0725 18:31:13.107492  437934 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0725 18:31:13.107521  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:13.604674  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:13.663689  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:14.109453  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:14.388923  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.034300127s)
	I0725 18:31:14.389002  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.983973962s)
	I0725 18:31:14.389184  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.882840551s)
	I0725 18:31:14.389220  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.859438705s)
	I0725 18:31:14.389246  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.60087001s)
	I0725 18:31:14.389342  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.445978084s)
	I0725 18:31:14.389360  437934 addons.go:475] Verifying addon registry=true in "addons-673848"
	I0725 18:31:14.389572  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.32892733s)
	I0725 18:31:14.389590  437934 addons.go:475] Verifying addon metrics-server=true in "addons-673848"
	I0725 18:31:14.389698  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.867671251s)
	W0725 18:31:14.389733  437934 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0725 18:31:14.389751  437934 retry.go:31] will retry after 233.893196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
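
The failure above is an ordering problem rather than a broken manifest: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass custom resource, and it was applied in the same kubectl apply that creates the snapshot.storage.k8s.io CRDs, so the API server had not yet registered the new kind ("no matches for kind ... ensure CRDs are installed first"). minikube just retries after 233ms (the --force re-apply at 18:31:14.624473 below), which succeeds once the CRDs have been registered. The usual manual equivalent is to apply the CRDs first, wait for them to become established, and only then apply resources of the new kinds, roughly like this (illustrative kubectl sequence, not taken from minikube):

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml
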
	I0725 18:31:14.389794  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.600146468s)
	I0725 18:31:14.390032  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.001894075s)
	I0725 18:31:14.392558  437934 out.go:177] * Verifying registry addon...
	I0725 18:31:14.401569  437934 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-673848 service yakd-dashboard -n yakd-dashboard
	
	I0725 18:31:14.403911  437934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0725 18:31:14.451506  437934 out.go:239] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
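
The warning above is an optimistic-concurrency conflict: while marking the local-path StorageClass as the default, something else updated the object between minikube's read and its write, so the API server rejected the stale update ("the object has been modified; please apply your changes to the latest version"). The usual remedy is to re-read the object and retry the write, for example with client-go's RetryOnConflict helper. A minimal sketch, assuming client-go and the in-VM kubeconfig path seen in the logs; this is not minikube's code:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/retry"
    )

    // markDefault re-reads the StorageClass on every attempt so each update is
    // made against the latest resourceVersion, avoiding the conflict seen above.
    // Illustrative sketch only, not minikube's implementation.
    func markDefault(cs *kubernetes.Clientset, name string) error {
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if sc.Annotations == nil {
    			sc.Annotations = map[string]string{}
    		}
    		sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    		_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
    		return err
    	})
    }

    func main() {
    	// Kubeconfig path taken from the log lines above; adjust as needed.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	if err := markDefault(cs, "local-path"); err != nil {
    		panic(err)
    	}
    }
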
	I0725 18:31:14.500874  437934 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0725 18:31:14.500942  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:14.597127  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:14.624473  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0725 18:31:14.912916  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:15.089830  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:15.402639  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.629987908s)
	I0725 18:31:15.402671  437934 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-673848"
	I0725 18:31:15.402826  437934 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.247690086s)
	I0725 18:31:15.408031  437934 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0725 18:31:15.408031  437934 out.go:177] * Verifying csi-hostpath-driver addon...
	I0725 18:31:15.410068  437934 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0725 18:31:15.411021  437934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0725 18:31:15.412022  437934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0725 18:31:15.412046  437934 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0725 18:31:15.436967  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:15.439449  437934 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0725 18:31:15.439523  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:15.525699  437934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0725 18:31:15.525774  437934 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0725 18:31:15.594323  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:15.598267  437934 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0725 18:31:15.598344  437934 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0725 18:31:15.756015  437934 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0725 18:31:15.909763  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:15.917491  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:16.089330  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:16.141432  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:16.172362  437934 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.54778605s)
	I0725 18:31:16.410027  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:16.418651  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:16.599267  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:16.681745  437934 addons.go:475] Verifying addon gcp-auth=true in "addons-673848"
	I0725 18:31:16.684115  437934 out.go:177] * Verifying gcp-auth addon...
	I0725 18:31:16.687381  437934 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0725 18:31:16.693423  437934 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0725 18:31:16.908955  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:16.916613  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:17.088812  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:17.409638  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:17.417526  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:17.599181  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:17.909426  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:17.917723  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:18.090154  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:18.410017  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:18.416982  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:18.589639  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:18.640358  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:18.910518  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:18.918776  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:19.090112  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:19.409834  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:19.423207  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:19.602925  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:19.908884  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:19.916986  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:20.090469  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:20.408684  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:20.417828  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:20.588912  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:20.640456  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:20.912299  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:20.917286  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:21.090027  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:21.409084  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:21.417877  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:21.589783  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:21.908864  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:21.916590  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:22.089099  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:22.411249  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:22.419042  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:22.589042  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:22.642155  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:22.908845  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:22.916047  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:23.089517  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:23.410367  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:23.417673  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:23.589846  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:23.909028  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:23.917718  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:24.091787  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:24.409802  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:24.416670  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:24.590013  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:24.642424  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:24.910162  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:24.918132  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:25.090222  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:25.408813  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:25.416607  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:25.589397  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:25.909958  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:25.916497  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:26.089541  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:26.409561  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:26.417316  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:26.589217  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:26.909107  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:26.917065  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:27.089959  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:27.140435  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:27.410031  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:27.416672  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:27.589444  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:27.909522  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:27.917701  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:28.088910  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:28.408378  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:28.416735  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:28.589263  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:28.908991  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:28.916960  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:29.089255  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:29.141060  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:29.409513  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:29.421682  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:29.589044  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:29.909426  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:29.916998  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:30.098802  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:30.412640  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:30.423559  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:30.590049  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:30.909526  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:30.924787  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:31.109473  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:31.146340  437934 pod_ready.go:102] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"False"
	I0725 18:31:31.408870  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:31.416680  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:31.589805  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:31.909107  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:31.917157  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:32.094357  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:32.413035  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:32.425815  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:32.588530  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:32.640546  437934 pod_ready.go:92] pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:32.640572  437934 pod_ready.go:81] duration metric: took 25.506862445s for pod "coredns-7db6d8ff4d-g5k44" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.640587  437934 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-673848" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.646194  437934 pod_ready.go:92] pod "etcd-addons-673848" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:32.646219  437934 pod_ready.go:81] duration metric: took 5.624142ms for pod "etcd-addons-673848" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.646236  437934 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-673848" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.657140  437934 pod_ready.go:92] pod "kube-apiserver-addons-673848" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:32.657163  437934 pod_ready.go:81] duration metric: took 10.917706ms for pod "kube-apiserver-addons-673848" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.657176  437934 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-673848" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.663073  437934 pod_ready.go:92] pod "kube-controller-manager-addons-673848" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:32.663100  437934 pod_ready.go:81] duration metric: took 5.914426ms for pod "kube-controller-manager-addons-673848" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.663113  437934 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-csqkx" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.668470  437934 pod_ready.go:92] pod "kube-proxy-csqkx" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:32.668496  437934 pod_ready.go:81] duration metric: took 5.374194ms for pod "kube-proxy-csqkx" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.668508  437934 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-673848" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:32.909702  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:32.917831  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:33.038163  437934 pod_ready.go:92] pod "kube-scheduler-addons-673848" in "kube-system" namespace has status "Ready":"True"
	I0725 18:31:33.038268  437934 pod_ready.go:81] duration metric: took 369.750415ms for pod "kube-scheduler-addons-673848" in "kube-system" namespace to be "Ready" ...
	I0725 18:31:33.038292  437934 pod_ready.go:38] duration metric: took 26.949028039s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 18:31:33.038333  437934 api_server.go:52] waiting for apiserver process to appear ...
	I0725 18:31:33.038419  437934 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:31:33.065061  437934 api_server.go:72] duration metric: took 29.329460804s to wait for apiserver process to appear ...
	I0725 18:31:33.065142  437934 api_server.go:88] waiting for apiserver healthz status ...
	I0725 18:31:33.065177  437934 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0725 18:31:33.073023  437934 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0725 18:31:33.074389  437934 api_server.go:141] control plane version: v1.30.3
	I0725 18:31:33.074436  437934 api_server.go:131] duration metric: took 9.273726ms to wait for apiserver health ...
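
For context on the healthz lines just above: the probe is an authenticated GET against the apiserver's /healthz path, using the endpoint and client credentials from the cluster's kubeconfig. A minimal client-go sketch of an equivalent check is shown below; it assumes a kubeconfig at the default location and is only an illustration, not minikube's actual api_server.go implementation.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the apiserver endpoint and client credentials from the default kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz through the authenticated REST client; a healthy apiserver answers "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // expected output: ok
}
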
	I0725 18:31:33.074446  437934 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 18:31:33.089028  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:33.245609  437934 system_pods.go:59] 18 kube-system pods found
	I0725 18:31:33.245712  437934 system_pods.go:61] "coredns-7db6d8ff4d-g5k44" [75bfb1df-e1f0-4f40-9709-48a0a937e47c] Running
	I0725 18:31:33.245740  437934 system_pods.go:61] "csi-hostpath-attacher-0" [f00006a6-9598-43d9-a438-a5e575ef46b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0725 18:31:33.245783  437934 system_pods.go:61] "csi-hostpath-resizer-0" [58e1fff6-9b7b-40da-b2e1-ce4be1821dba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0725 18:31:33.245813  437934 system_pods.go:61] "csi-hostpathplugin-5hscv" [1a623c59-1923-4797-938a-e91c66b49cb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0725 18:31:33.245834  437934 system_pods.go:61] "etcd-addons-673848" [55207b3b-0d45-4363-a476-ea2ee2d88fc9] Running
	I0725 18:31:33.245855  437934 system_pods.go:61] "kindnet-hf2mz" [84394c83-89c9-49e7-8119-ef2201a0f962] Running
	I0725 18:31:33.245874  437934 system_pods.go:61] "kube-apiserver-addons-673848" [c78868c5-a14a-43c4-a062-f6abb802c831] Running
	I0725 18:31:33.245907  437934 system_pods.go:61] "kube-controller-manager-addons-673848" [1ff36380-ff67-4b6f-86aa-7ac705b7ee14] Running
	I0725 18:31:33.245925  437934 system_pods.go:61] "kube-ingress-dns-minikube" [83c3a2da-8d97-4e21-a977-99c9ccd6bc31] Running
	I0725 18:31:33.245949  437934 system_pods.go:61] "kube-proxy-csqkx" [d131a2b5-604d-4bbb-ad72-5969bca2e933] Running
	I0725 18:31:33.245987  437934 system_pods.go:61] "kube-scheduler-addons-673848" [b03d35f4-9527-4057-a9e8-687bf1c722bf] Running
	I0725 18:31:33.246019  437934 system_pods.go:61] "metrics-server-c59844bb4-zbh9n" [cd479902-1630-4c96-9478-bafeaf4649a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:31:33.246045  437934 system_pods.go:61] "nvidia-device-plugin-daemonset-fc76g" [8d3c3a9e-cce4-488b-9311-576d1c2f87f8] Running
	I0725 18:31:33.246066  437934 system_pods.go:61] "registry-656c9c8d9c-zfs2w" [ad4c3318-d6a0-4edb-802c-b2f86930c67b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0725 18:31:33.246101  437934 system_pods.go:61] "registry-proxy-bvs6l" [2d1b2ee7-db00-40ff-8469-b85d4ef39cea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0725 18:31:33.246130  437934 system_pods.go:61] "snapshot-controller-745499f584-5x8n5" [670dba32-61db-4e88-a2f8-9a0c9f4f9f9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 18:31:33.246152  437934 system_pods.go:61] "snapshot-controller-745499f584-sjnfs" [5d541ef7-c35a-4d39-8273-3e274257975b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 18:31:33.246186  437934 system_pods.go:61] "storage-provisioner" [7c4dff9d-8e21-4f47-adc2-ac772d2a8bc5] Running
	I0725 18:31:33.246219  437934 system_pods.go:74] duration metric: took 171.765045ms to wait for pod list to return data ...
	I0725 18:31:33.246245  437934 default_sa.go:34] waiting for default service account to be created ...
	I0725 18:31:33.410336  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:33.416749  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:33.438579  437934 default_sa.go:45] found service account: "default"
	I0725 18:31:33.438611  437934 default_sa.go:55] duration metric: took 192.346057ms for default service account to be created ...
	I0725 18:31:33.438621  437934 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 18:31:33.589085  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:33.647850  437934 system_pods.go:86] 18 kube-system pods found
	I0725 18:31:33.647888  437934 system_pods.go:89] "coredns-7db6d8ff4d-g5k44" [75bfb1df-e1f0-4f40-9709-48a0a937e47c] Running
	I0725 18:31:33.647900  437934 system_pods.go:89] "csi-hostpath-attacher-0" [f00006a6-9598-43d9-a438-a5e575ef46b2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0725 18:31:33.647911  437934 system_pods.go:89] "csi-hostpath-resizer-0" [58e1fff6-9b7b-40da-b2e1-ce4be1821dba] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0725 18:31:33.647921  437934 system_pods.go:89] "csi-hostpathplugin-5hscv" [1a623c59-1923-4797-938a-e91c66b49cb9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0725 18:31:33.647926  437934 system_pods.go:89] "etcd-addons-673848" [55207b3b-0d45-4363-a476-ea2ee2d88fc9] Running
	I0725 18:31:33.647932  437934 system_pods.go:89] "kindnet-hf2mz" [84394c83-89c9-49e7-8119-ef2201a0f962] Running
	I0725 18:31:33.647941  437934 system_pods.go:89] "kube-apiserver-addons-673848" [c78868c5-a14a-43c4-a062-f6abb802c831] Running
	I0725 18:31:33.647947  437934 system_pods.go:89] "kube-controller-manager-addons-673848" [1ff36380-ff67-4b6f-86aa-7ac705b7ee14] Running
	I0725 18:31:33.647961  437934 system_pods.go:89] "kube-ingress-dns-minikube" [83c3a2da-8d97-4e21-a977-99c9ccd6bc31] Running
	I0725 18:31:33.647965  437934 system_pods.go:89] "kube-proxy-csqkx" [d131a2b5-604d-4bbb-ad72-5969bca2e933] Running
	I0725 18:31:33.647970  437934 system_pods.go:89] "kube-scheduler-addons-673848" [b03d35f4-9527-4057-a9e8-687bf1c722bf] Running
	I0725 18:31:33.647982  437934 system_pods.go:89] "metrics-server-c59844bb4-zbh9n" [cd479902-1630-4c96-9478-bafeaf4649a1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0725 18:31:33.647987  437934 system_pods.go:89] "nvidia-device-plugin-daemonset-fc76g" [8d3c3a9e-cce4-488b-9311-576d1c2f87f8] Running
	I0725 18:31:33.647994  437934 system_pods.go:89] "registry-656c9c8d9c-zfs2w" [ad4c3318-d6a0-4edb-802c-b2f86930c67b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0725 18:31:33.648004  437934 system_pods.go:89] "registry-proxy-bvs6l" [2d1b2ee7-db00-40ff-8469-b85d4ef39cea] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0725 18:31:33.648011  437934 system_pods.go:89] "snapshot-controller-745499f584-5x8n5" [670dba32-61db-4e88-a2f8-9a0c9f4f9f9b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 18:31:33.648018  437934 system_pods.go:89] "snapshot-controller-745499f584-sjnfs" [5d541ef7-c35a-4d39-8273-3e274257975b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0725 18:31:33.648024  437934 system_pods.go:89] "storage-provisioner" [7c4dff9d-8e21-4f47-adc2-ac772d2a8bc5] Running
	I0725 18:31:33.648035  437934 system_pods.go:126] duration metric: took 209.407862ms to wait for k8s-apps to be running ...
	I0725 18:31:33.648047  437934 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 18:31:33.648109  437934 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:31:33.664126  437934 system_svc.go:56] duration metric: took 16.067487ms WaitForService to wait for kubelet
	I0725 18:31:33.664205  437934 kubeadm.go:582] duration metric: took 29.92860726s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 18:31:33.664247  437934 node_conditions.go:102] verifying NodePressure condition ...
	I0725 18:31:33.847685  437934 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0725 18:31:33.847770  437934 node_conditions.go:123] node cpu capacity is 2
	I0725 18:31:33.847797  437934 node_conditions.go:105] duration metric: took 183.512396ms to run NodePressure ...
	I0725 18:31:33.847822  437934 start.go:241] waiting for startup goroutines ...
	I0725 18:31:33.916235  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:33.920231  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:34.088793  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:34.411372  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:34.420038  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:34.589445  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:34.909339  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:34.917729  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:35.090784  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:35.409651  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:35.416794  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:35.589133  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:35.910640  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:35.918042  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:36.092434  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:36.412100  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:36.417096  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:36.591915  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:36.908352  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:36.916559  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:37.089039  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:37.428033  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:37.439354  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:37.590296  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:37.910567  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:37.918801  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:38.092190  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:38.410571  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:38.419619  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:38.589239  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:38.909232  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:38.917437  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:39.090311  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:39.408710  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:39.417251  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:39.590496  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:39.909133  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:39.917972  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:40.090769  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:40.410249  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:40.420206  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:40.589446  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:40.910309  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:40.918044  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:41.090485  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:41.408685  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0725 18:31:41.416414  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:41.590620  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:41.911433  437934 kapi.go:107] duration metric: took 27.50752188s to wait for kubernetes.io/minikube-addons=registry ...
	I0725 18:31:41.923342  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:42.092710  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:42.417501  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:42.588767  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:42.917349  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:43.092161  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:43.417294  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:43.591935  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:43.917348  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:44.092180  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:44.417091  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:44.589786  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:44.917425  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:45.101525  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:45.419687  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:45.588814  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:45.926556  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:46.089500  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:46.417438  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:46.589865  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:46.916969  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:47.090812  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:47.416571  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:47.588823  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:47.916559  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:48.090258  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:48.417166  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:48.591219  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:48.916719  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:49.089130  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:49.417833  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:49.589475  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:49.916570  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:50.090499  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:50.416883  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:50.590093  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:50.916616  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:51.090863  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:51.418698  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:51.589495  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:51.919993  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:52.090962  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:52.417590  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:52.591548  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:52.916434  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:53.089075  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:53.422473  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:53.603015  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:53.917568  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:54.090197  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:54.419101  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:54.640010  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:54.917846  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:55.091533  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:55.476688  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:55.593666  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:55.917451  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:56.093168  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:56.417299  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:56.591203  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:56.916581  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:57.090294  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:57.417514  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:57.590278  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:57.929501  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:58.089828  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:58.416805  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:58.593529  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:58.917732  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:59.089120  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:59.416788  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:31:59.590309  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:31:59.916840  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:00.106973  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:00.437812  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:00.595096  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:00.916575  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:01.089391  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:01.417657  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:01.589405  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:01.916639  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:02.089056  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:02.417537  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:02.588550  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:02.916606  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:03.091061  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:03.417072  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:03.589970  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:03.917522  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:04.089459  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:04.416137  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:04.589413  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:04.917329  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:05.090124  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:05.418506  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:05.590766  437934 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0725 18:32:05.918380  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:06.091624  437934 kapi.go:107] duration metric: took 53.007236405s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0725 18:32:06.419083  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:06.918389  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:07.417154  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:07.918039  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:08.417136  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:08.916510  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:09.422594  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:09.925478  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:10.417020  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:10.919267  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:11.417122  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:11.917677  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:12.416606  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:12.917034  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:13.417369  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0725 18:32:13.916991  437934 kapi.go:107] duration metric: took 58.50597076s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
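
The long runs of kapi.go:96 lines above record minikube polling each addon's label selector (registry, ingress-nginx, csi-hostpath-driver, and gcp-auth below) until the matching pods leave Pending, with the kapi.go:107 lines marking each wait completing. A rough client-go sketch of such a poll loop follows; it reuses the csi-hostpath-driver selector from the log as an example, assumes the default kubeconfig path, and is an illustration rather than minikube's actual kapi.go code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector re-lists pods matching selector in ns on a fixed interval
// and returns once every matching pod reports Running, or when the context expires.
func waitForSelector(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			allRunning := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // the same "context deadline exceeded" outcome seen when a wait times out
		case <-tick.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitForSelector(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"); err != nil {
		panic(err)
	}
	fmt.Println("selector ready")
}
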
	I0725 18:32:39.691670  437934 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0725 18:32:39.691693  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:40.190833  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:40.691706  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:41.191510  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:41.691810  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:42.192423  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:42.691072  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:43.190698  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:43.691354  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:44.190467  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:44.691198  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:45.191254  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:45.690619  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:46.191593  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:46.691164  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:47.190553  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:47.691349  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:48.191825  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:48.690697  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:49.191467  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:49.690587  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:50.191251  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:50.691032  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:51.191665  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:51.691694  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:52.191003  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:52.690474  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:53.191146  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:53.691364  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:54.191261  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:54.690402  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:55.191153  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:55.690859  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:56.190851  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:56.690600  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:57.191193  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:57.690918  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:58.190377  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:58.690658  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:59.191290  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:32:59.690794  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:00.198175  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:00.691216  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:01.192246  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:01.691149  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:02.190616  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:02.691480  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:03.190976  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:03.690762  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:04.191526  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:04.691193  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:05.191002  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:05.690829  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:06.191217  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:06.690908  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:07.191045  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:07.690634  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:08.191880  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:08.691142  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:09.190760  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:09.692110  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:10.191389  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:10.691004  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:11.191849  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:11.692132  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:12.190802  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:12.691618  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:13.191399  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:13.691085  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:14.191486  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:14.691151  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:15.191113  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:15.691367  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:16.191258  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:16.691408  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:17.191276  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:17.690774  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:18.191197  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:18.691165  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:19.190629  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:19.692062  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:20.192142  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:20.690796  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:21.192096  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:21.690825  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:22.191431  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:22.691424  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:23.191504  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:23.691793  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:24.192600  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:24.693167  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:25.190754  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:25.691689  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:26.191327  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:26.691546  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:27.191486  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:27.690891  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:28.190805  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:28.691171  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:29.191834  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:29.693248  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:30.191593  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:30.691164  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:31.191217  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:31.691878  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:32.191932  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:32.691020  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:33.190920  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:33.691353  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:34.191280  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:34.690861  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:35.191972  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:35.692081  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:36.191308  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:36.691215  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:37.191770  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:37.691843  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:38.191009  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:38.691073  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:39.191082  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:39.692706  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:40.192542  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:40.691401  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:41.192118  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:41.691116  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:42.191705  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:42.690687  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:43.191191  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:43.690761  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:44.191025  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:44.692314  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:45.192205  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:45.690868  437934 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0725 18:33:46.191385  437934 kapi.go:107] duration metric: took 2m29.50400153s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0725 18:33:46.193484  437934 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-673848 cluster.
	I0725 18:33:46.196052  437934 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0725 18:33:46.197857  437934 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0725 18:33:46.200518  437934 out.go:177] * Enabled addons: storage-provisioner, ingress-dns, volcano, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0725 18:33:46.202969  437934 addons.go:510] duration metric: took 2m42.466865661s for enable addons: enabled=[storage-provisioner ingress-dns volcano nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0725 18:33:46.203022  437934 start.go:246] waiting for cluster config update ...
	I0725 18:33:46.203044  437934 start.go:255] writing updated cluster config ...
	I0725 18:33:46.203391  437934 ssh_runner.go:195] Run: rm -f paused
	I0725 18:33:46.552165  437934 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 18:33:46.555010  437934 out.go:177] * Done! kubectl is now configured to use "addons-673848" cluster and "default" namespace by default
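	The gcp-auth messages above describe a per-pod opt-out: the addon's mutating webhook (gcp-auth-mutate.k8s.io in the kube-apiserver log further down) skips any pod carrying the `gcp-auth-skip-secret` label, so the credentials secret is not mounted into it. A minimal sketch of such a pod manifest, with a placeholder name and image:

	  apiVersion: v1
	  kind: Pod
	  metadata:
	    name: no-gcp-creds              # placeholder pod name
	    labels:
	      gcp-auth-skip-secret: "true"  # key is what the gcp-auth webhook checks; value here is illustrative
	  spec:
	    containers:
	    - name: app
	      image: nginx                  # placeholder image

	As the log notes, already-running pods are not mutated retroactively; recreate them or rerun addons enable with --refresh (e.g. minikube addons enable gcp-auth --refresh).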
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	1dbba43345a71       d1ca868ab82aa       2 minutes ago       Exited              gadget                                   5                   09d0ad6b1973b       gadget-8wplf
	ddbea505242ed       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   9eef1767360fd       gcp-auth-5db96cd9b4-4cwkr
	d6f021108387e       8b46b1cd48760       4 minutes ago       Running             admission                                0                   3c553ced86dc0       volcano-admission-5f7844f7bc-wrhg7
	0b27d6aa897ef       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   62358ae434417       csi-hostpathplugin-5hscv
	55163f73578f2       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   62358ae434417       csi-hostpathplugin-5hscv
	b204a123c619a       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   62358ae434417       csi-hostpathplugin-5hscv
	dd6785d19382a       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   62358ae434417       csi-hostpathplugin-5hscv
	3122dd786b4fb       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   62358ae434417       csi-hostpathplugin-5hscv
	7bdbac117047d       24f8f979639f1       4 minutes ago       Running             controller                               0                   c869f0591d1fc       ingress-nginx-controller-6d9bd977d4-c2lcw
	4fce5f0c03d86       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   bfff40b90a68b       csi-hostpath-resizer-0
	33ca880e0a393       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   14f2255d4de35       csi-hostpath-attacher-0
	c8ef6aeeb61ba       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   62358ae434417       csi-hostpathplugin-5hscv
	3e054b570b728       296b5f799fcd8       5 minutes ago       Exited              patch                                    2                   b5e7834333770       ingress-nginx-admission-patch-dtq96
	fb903c0516b44       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   bf6a097c7a242       volcano-controllers-59cb4746db-hjrm4
	2555fea27ab51       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   2033bf9901377       metrics-server-c59844bb4-zbh9n
	c91087401364c       77bdba588b953       5 minutes ago       Running             yakd                                     0                   73caa59f132d3       yakd-dashboard-799879c74f-qr7zr
	7f2692c9bd6e2       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   2e8c209d5082a       snapshot-controller-745499f584-5x8n5
	23f29ac0601c0       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   beea77d9487b7       volcano-scheduler-844f6db89b-48f4d
	ba050446ddf43       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   8dc0acb27fcfc       snapshot-controller-745499f584-sjnfs
	6c981bd6fec3f       296b5f799fcd8       5 minutes ago       Exited              create                                   0                   cf58699fe11f9       ingress-nginx-admission-create-bl6jb
	aa7e4962308ca       40bd730847e7e       5 minutes ago       Running             registry                                 0                   460218378fece       registry-656c9c8d9c-zfs2w
	ea35409e86727       8f3fc47ac1fb3       5 minutes ago       Running             cloud-spanner-emulator                   0                   6744162f80780       cloud-spanner-emulator-6fcd4f6f98-ckrzh
	56ec286ba12d9       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   42977ba004d88       registry-proxy-bvs6l
	ba2fe95c1b79c       2437cf7621777       5 minutes ago       Running             coredns                                  0                   78fffebfe41bc       coredns-7db6d8ff4d-g5k44
	2160d5487993f       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   19bbcc9cf6f55       local-path-provisioner-8d985888d-j54wm
	fc4d3124c6592       b644f4c9bf9c7       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   17e0968f775f1       nvidia-device-plugin-daemonset-fc76g
	0dc4c7cdf919d       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   583d9d00907ed       kube-ingress-dns-minikube
	46c16ce635c1c       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   1b039dd739eba       storage-provisioner
	e5c3acb417e93       f42786f8afd22       5 minutes ago       Running             kindnet-cni                              0                   5cf522b8cf42a       kindnet-hf2mz
	40434b8add502       2351f570ed0ea       5 minutes ago       Running             kube-proxy                               0                   120cc76ef0ad3       kube-proxy-csqkx
	15ccf2dd3823c       61773190d42ff       6 minutes ago       Running             kube-apiserver                           0                   fa80bd7e5ceaa       kube-apiserver-addons-673848
	8112591193aa7       8e97cdb19e7cc       6 minutes ago       Running             kube-controller-manager                  0                   26aa1202f33b3       kube-controller-manager-addons-673848
	5a8e71535b611       d48f992a22722       6 minutes ago       Running             kube-scheduler                           0                   2754e3b7a2b67       kube-scheduler-addons-673848
	2ff89edd076cd       014faa467e297       6 minutes ago       Running             etcd                                     0                   bc73cf67ac38a       etcd-addons-673848
	
	
	==> containerd <==
	Jul 25 18:34:51 addons-673848 containerd[811]: time="2024-07-25T18:34:51.133408104Z" level=info msg="Forcibly stopping sandbox \"cb501bf272c256f8550732bdeb6ab12880f9d6cea7957bcd45efeaf7bf91ad4e\""
	Jul 25 18:34:51 addons-673848 containerd[811]: time="2024-07-25T18:34:51.156973414Z" level=info msg="TearDown network for sandbox \"cb501bf272c256f8550732bdeb6ab12880f9d6cea7957bcd45efeaf7bf91ad4e\" successfully"
	Jul 25 18:34:51 addons-673848 containerd[811]: time="2024-07-25T18:34:51.163160802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb501bf272c256f8550732bdeb6ab12880f9d6cea7957bcd45efeaf7bf91ad4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jul 25 18:34:51 addons-673848 containerd[811]: time="2024-07-25T18:34:51.163290669Z" level=info msg="RemovePodSandbox \"cb501bf272c256f8550732bdeb6ab12880f9d6cea7957bcd45efeaf7bf91ad4e\" returns successfully"
	Jul 25 18:34:55 addons-673848 containerd[811]: time="2024-07-25T18:34:55.971684309Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\""
	Jul 25 18:34:56 addons-673848 containerd[811]: time="2024-07-25T18:34:56.099384844Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jul 25 18:34:56 addons-673848 containerd[811]: time="2024-07-25T18:34:56.101092675Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734: active requests=0, bytes read=89"
	Jul 25 18:34:56 addons-673848 containerd[811]: time="2024-07-25T18:34:56.104753737Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\" with image id \"sha256:d1ca868ab82aa865a5f7b689c320359f3e31172de7b93dd0107fe2e49e617eeb\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\", size \"73046218\" in 133.018394ms"
	Jul 25 18:34:56 addons-673848 containerd[811]: time="2024-07-25T18:34:56.104809629Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\" returns image reference \"sha256:d1ca868ab82aa865a5f7b689c320359f3e31172de7b93dd0107fe2e49e617eeb\""
	Jul 25 18:34:56 addons-673848 containerd[811]: time="2024-07-25T18:34:56.107674079Z" level=info msg="CreateContainer within sandbox \"09d0ad6b1973b02bbc33b3e1dd50569cd68590bc1406c9a9a6756dc6a3cbef4d\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Jul 25 18:34:56 addons-673848 containerd[811]: time="2024-07-25T18:34:56.131959173Z" level=info msg="CreateContainer within sandbox \"09d0ad6b1973b02bbc33b3e1dd50569cd68590bc1406c9a9a6756dc6a3cbef4d\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\""
	Jul 25 18:34:56 addons-673848 containerd[811]: time="2024-07-25T18:34:56.133021468Z" level=info msg="StartContainer for \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\""
	Jul 25 18:34:56 addons-673848 containerd[811]: time="2024-07-25T18:34:56.184327001Z" level=info msg="StartContainer for \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\" returns successfully"
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.303636247Z" level=error msg="ExecSync for \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\" failed" error="failed to exec in container: failed to start exec \"d809a5a6f9f76b0f79cafecb2da79e6545e4c94983a10806c62dde4c0e996a85\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.316760477Z" level=error msg="ExecSync for \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\" failed" error="failed to exec in container: failed to start exec \"80a660917fd3e9610b8b2c70d984c4c92361270a0fe77ce5d51e662e8762edc8\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.328632205Z" level=error msg="ExecSync for \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\" failed" error="failed to exec in container: failed to start exec \"893598d93746ae80e8f695a3b6ec8e31582aa7d4f2a46517f1ab8f924d055c31\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.339453281Z" level=error msg="ttrpc: received message on inactive stream" stream=177
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.341267513Z" level=error msg="ExecSync for \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\" failed" error="failed to exec in container: failed to start exec \"fc3c3c74f89f21a8f00aa529fff0514df9a00d47dee7050b4cd65e48e40b3871\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.352801249Z" level=error msg="ExecSync for \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\" failed" error="failed to exec in container: failed to start exec \"2a903d18767fb8700ddf6f95ca4cbc76a52de7ea5384f32bc66e5342fdae623d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.367506700Z" level=error msg="ExecSync for \"1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa\" failed" error="failed to exec in container: failed to start exec \"70494c014f15f3f426ae5d4f81cd6d4ce6e8ae553fbd532835090412080956d8\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.454222961Z" level=info msg="shim disconnected" id=1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa namespace=k8s.io
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.454263502Z" level=warning msg="cleaning up after shim disconnected" id=1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa namespace=k8s.io
	Jul 25 18:34:57 addons-673848 containerd[811]: time="2024-07-25T18:34:57.454339832Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 25 18:34:58 addons-673848 containerd[811]: time="2024-07-25T18:34:58.232879806Z" level=info msg="RemoveContainer for \"489d80feb487f4f14b4ed1c0e1eb793822482042ba0411819201e7da7d150ec7\""
	Jul 25 18:34:58 addons-673848 containerd[811]: time="2024-07-25T18:34:58.239087216Z" level=info msg="RemoveContainer for \"489d80feb487f4f14b4ed1c0e1eb793822482042ba0411819201e7da7d150ec7\" returns successfully"
	
	
	==> coredns [ba2fe95c1b79ccdd4c056286a6ff12245f92b0172bc79da86742b79f8418767b] <==
	[INFO] 10.244.0.4:57367 - 26657 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012063s
	[INFO] 10.244.0.4:57958 - 56687 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001662252s
	[INFO] 10.244.0.4:57958 - 30802 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001763099s
	[INFO] 10.244.0.4:43040 - 50093 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000075929s
	[INFO] 10.244.0.4:43040 - 35759 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000092453s
	[INFO] 10.244.0.4:56374 - 7595 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102398s
	[INFO] 10.244.0.4:56374 - 30113 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00005659s
	[INFO] 10.244.0.4:44705 - 31895 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058115s
	[INFO] 10.244.0.4:44705 - 58516 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060044s
	[INFO] 10.244.0.4:48502 - 55381 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000126168s
	[INFO] 10.244.0.4:48502 - 56663 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072179s
	[INFO] 10.244.0.4:44541 - 26319 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001525508s
	[INFO] 10.244.0.4:44541 - 2861 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001555924s
	[INFO] 10.244.0.4:44102 - 27444 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000094472s
	[INFO] 10.244.0.4:44102 - 42033 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000168497s
	[INFO] 10.244.0.24:42038 - 6748 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000204181s
	[INFO] 10.244.0.24:58456 - 59515 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000164428s
	[INFO] 10.244.0.24:37837 - 18139 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000082739s
	[INFO] 10.244.0.24:43863 - 30891 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000057722s
	[INFO] 10.244.0.24:40803 - 61621 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102906s
	[INFO] 10.244.0.24:57803 - 64402 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000060298s
	[INFO] 10.244.0.24:46261 - 5441 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002277477s
	[INFO] 10.244.0.24:33215 - 20481 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001947638s
	[INFO] 10.244.0.24:53236 - 62861 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001066137s
	[INFO] 10.244.0.24:35260 - 43844 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000826207s
	
	
	==> describe nodes <==
	Name:               addons-673848
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-673848
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=addons-673848
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T18_30_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-673848
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-673848"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 18:30:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-673848
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 18:36:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 18:33:54 +0000   Thu, 25 Jul 2024 18:30:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 18:33:54 +0000   Thu, 25 Jul 2024 18:30:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 18:33:54 +0000   Thu, 25 Jul 2024 18:30:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 18:33:54 +0000   Thu, 25 Jul 2024 18:30:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-673848
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 cbd08d7e26d44abd9371653b34a49439
	  System UUID:                eded4eba-134e-4c9b-9db5-cc705ddcf922
	  Boot ID:                    6208173b-d514-4152-b2e9-119a649e8fe8
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-ckrzh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-8wplf                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-5db96cd9b4-4cwkr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-c2lcw    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m53s
	  kube-system                 coredns-7db6d8ff4d-g5k44                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpathplugin-5hscv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 etcd-addons-673848                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m14s
	  kube-system                 kindnet-hf2mz                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m1s
	  kube-system                 kube-apiserver-addons-673848                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-addons-673848        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-csqkx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-addons-673848                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 metrics-server-c59844bb4-zbh9n               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m55s
	  kube-system                 nvidia-device-plugin-daemonset-fc76g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-656c9c8d9c-zfs2w                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 registry-proxy-bvs6l                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-745499f584-5x8n5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-745499f584-sjnfs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-8d985888d-j54wm       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  volcano-system              volcano-admission-5f7844f7bc-wrhg7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-controllers-59cb4746db-hjrm4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-scheduler-844f6db89b-48f4d           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  yakd-dashboard              yakd-dashboard-799879c74f-qr7zr              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 5m59s  kube-proxy       
	  Normal  Starting                 6m15s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m14s  kubelet          Node addons-673848 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m14s  kubelet          Node addons-673848 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m14s  kubelet          Node addons-673848 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             6m14s  kubelet          Node addons-673848 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  6m14s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                6m14s  kubelet          Node addons-673848 status is now: NodeReady
	  Normal  RegisteredNode           6m2s   node-controller  Node addons-673848 event: Registered Node addons-673848 in Controller
	
	
	==> dmesg <==
	[  +0.000702] FS-Cache: N-cookie c=0000000c [p=00000003 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=0000000057ab999f{9p.inode} n=0000000098b5299d
	[  +0.001052] FS-Cache: N-key=[8] '896ced0000000000'
	[  +0.004584] FS-Cache: Duplicate cookie detected
	[  +0.000729] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000959] FS-Cache: O-cookie d=0000000057ab999f{9p.inode} n=00000000c2dd551e
	[  +0.001044] FS-Cache: O-key=[8] '896ced0000000000'
	[  +0.000701] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=0000000057ab999f{9p.inode} n=000000004200c04f
	[  +0.001031] FS-Cache: N-key=[8] '896ced0000000000'
	[  +2.491462] FS-Cache: Duplicate cookie detected
	[  +0.000712] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000995] FS-Cache: O-cookie d=0000000057ab999f{9p.inode} n=00000000188abc53
	[  +0.001041] FS-Cache: O-key=[8] '886ced0000000000'
	[  +0.000726] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=0000000057ab999f{9p.inode} n=0000000098b5299d
	[  +0.001019] FS-Cache: N-key=[8] '886ced0000000000'
	[  +0.334809] FS-Cache: Duplicate cookie detected
	[  +0.000691] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.000958] FS-Cache: O-cookie d=0000000057ab999f{9p.inode} n=0000000077409bcc
	[  +0.001023] FS-Cache: O-key=[8] '926ced0000000000'
	[  +0.000739] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.000927] FS-Cache: N-cookie d=0000000057ab999f{9p.inode} n=000000002edece3a
	[  +0.001020] FS-Cache: N-key=[8] '926ced0000000000'
	[Jul25 18:01] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [2ff89edd076cdb597ab68a802177d674ff9cffc85482befde03bed66128331ac] <==
	{"level":"info","ts":"2024-07-25T18:30:44.721974Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-25T18:30:44.722191Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-25T18:30:44.722226Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-25T18:30:44.722267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-07-25T18:30:44.722377Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-07-25T18:30:44.722451Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-25T18:30:44.722462Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-25T18:30:45.305685Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-25T18:30:45.305866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-25T18:30:45.305914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-07-25T18:30:45.305979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-07-25T18:30:45.306029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-25T18:30:45.306081Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-25T18:30:45.306137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-25T18:30:45.311501Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:30:45.312287Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-673848 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-25T18:30:45.312366Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:30:45.313599Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:30:45.313834Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:30:45.314033Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-25T18:30:45.31242Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-25T18:30:45.312841Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-25T18:30:45.319051Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-25T18:30:45.320799Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-25T18:30:45.400129Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [ddbea505242edd739f52bc8cf6fab11ca681bbdda4454233845822e77acbcdad] <==
	2024/07/25 18:33:45 GCP Auth Webhook started!
	2024/07/25 18:34:03 Ready to marshal response ...
	2024/07/25 18:34:03 Ready to write response ...
	2024/07/25 18:34:03 Ready to marshal response ...
	2024/07/25 18:34:03 Ready to write response ...
	
	
	==> kernel <==
	 18:37:05 up  2:19,  0 users,  load average: 0.16, 1.04, 2.50
	Linux addons-673848 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [e5c3acb417e93a30d5f77cfb88c303f4bcb00e5d965395273316b17a70e082fb] <==
	E0725 18:35:49.922019       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0725 18:35:57.991484       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0725 18:35:57.991519       1 main.go:299] handling current node
	W0725 18:36:01.463787       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0725 18:36:01.463845       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0725 18:36:07.991864       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0725 18:36:07.991902       1 main.go:299] handling current node
	I0725 18:36:17.991704       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0725 18:36:17.991759       1 main.go:299] handling current node
	W0725 18:36:24.017181       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0725 18:36:24.017218       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0725 18:36:27.991920       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0725 18:36:27.991959       1 main.go:299] handling current node
	W0725 18:36:31.418251       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 18:36:31.418295       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 18:36:32.697806       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0725 18:36:32.697844       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0725 18:36:37.992315       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0725 18:36:37.992356       1 main.go:299] handling current node
	I0725 18:36:47.991785       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0725 18:36:47.991974       1 main.go:299] handling current node
	I0725 18:36:57.992357       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0725 18:36:57.992389       1 main.go:299] handling current node
	W0725 18:36:58.352108       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0725 18:36:58.352152       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kube-apiserver [15ccf2dd3823cd10f56ad6f8f0e85c03efbfd5c9bd1b1930c4245d2c995a7499] <==
	W0725 18:32:13.626581       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:14.726867       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:15.804950       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:16.853680       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:17.956209       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:19.055176       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:19.631875       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	E0725 18:32:19.631914       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	W0725 18:32:19.632389       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:19.699685       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	E0725 18:32:19.699736       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	W0725 18:32:19.700161       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:20.102431       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:21.154629       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:22.184400       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:23.221324       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:24.240603       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.102.119.126:443: connect: connection refused
	W0725 18:32:39.513430       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	E0725 18:32:39.513532       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	W0725 18:33:19.640028       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	E0725 18:33:19.640070       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	W0725 18:33:19.704897       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	E0725 18:33:19.704941       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.198.155:443: connect: connection refused
	I0725 18:34:03.093712       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0725 18:34:03.135153       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [8112591193aa7ee562678c2deac3ef0ac87dcdd6680110a2c8bc9b2693cf2566] <==
	I0725 18:33:19.666221       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:19.679161       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:19.712517       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:19.723309       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:19.724460       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:19.740848       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:20.915870       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:20.929474       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:22.050502       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:22.075349       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:22.928599       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:22.941093       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:23.056293       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:23.067873       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:23.076942       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:23.083819       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:23.092047       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:23.100825       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:46.021236       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="16.37775ms"
	I0725 18:33:46.022748       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="30.515µs"
	I0725 18:33:53.021829       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:33:53.024282       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:53.076149       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0725 18:33:53.078652       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0725 18:34:02.818192       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
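
The burst of "enqueueing job" events for gcp-auth-certs-create and gcp-auth-certs-patch reflects the job controller re-syncing the certificate jobs as their pods progress; the gcp-auth ReplicaSet itself syncs at 18:33:46. A hypothetical follow-up check (not executed by the test) that both jobs completed:

	kubectl --context addons-673848 -n gcp-auth get jobs,pods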
	
	
	==> kube-proxy [40434b8add502bdd28c51dd54a3764200ce2dce0a0658d3f4e29f8aaac9062e8] <==
	I0725 18:31:05.341580       1 server_linux.go:69] "Using iptables proxy"
	I0725 18:31:05.367076       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0725 18:31:05.407105       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0725 18:31:05.407155       1 server_linux.go:165] "Using iptables Proxier"
	I0725 18:31:05.412004       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0725 18:31:05.412045       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0725 18:31:05.412070       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0725 18:31:05.412278       1 server.go:872] "Version info" version="v1.30.3"
	I0725 18:31:05.412295       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0725 18:31:05.422928       1 config.go:192] "Starting service config controller"
	I0725 18:31:05.422973       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0725 18:31:05.422994       1 config.go:101] "Starting endpoint slice config controller"
	I0725 18:31:05.422998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0725 18:31:05.430286       1 config.go:319] "Starting node config controller"
	I0725 18:31:05.430308       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0725 18:31:05.523347       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0725 18:31:05.523420       1 shared_informer.go:320] Caches are synced for service config
	I0725 18:31:05.530687       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [5a8e71535b6119d29dff9f11655980658a375fd0730f6d5c0161455420abadd8] <==
	E0725 18:30:48.760318       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 18:30:48.755971       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 18:30:48.760551       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0725 18:30:48.756041       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 18:30:48.760735       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0725 18:30:48.756093       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 18:30:48.761366       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0725 18:30:48.759871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 18:30:48.761548       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0725 18:30:48.759947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 18:30:48.761729       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 18:30:48.759960       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0725 18:30:48.762078       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 18:30:48.762180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0725 18:30:48.763072       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 18:30:48.767083       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0725 18:30:48.770824       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0725 18:30:48.775848       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0725 18:30:48.776205       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0725 18:30:48.776302       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0725 18:30:48.776468       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 18:30:48.776556       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0725 18:30:48.776712       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 18:30:48.776803       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0725 18:30:50.216704       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
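
The forbidden list/watch errors are the usual scheduler startup race: its informers begin before RBAC data has propagated, and they stop once caches sync at 18:30:50. Had they persisted, the grant could be checked directly with an illustrative command such as:

	kubectl --context addons-673848 auth can-i list nodes --as=system:kube-scheduler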
	
	
	==> kubelet <==
	Jul 25 18:35:16 addons-673848 kubelet[1552]: E0725 18:35:16.970705    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:35:25 addons-673848 kubelet[1552]: I0725 18:35:25.970404    1552 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fc76g" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 18:35:27 addons-673848 kubelet[1552]: I0725 18:35:27.970262    1552 scope.go:117] "RemoveContainer" containerID="1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa"
	Jul 25 18:35:27 addons-673848 kubelet[1552]: E0725 18:35:27.971245    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:35:28 addons-673848 kubelet[1552]: I0725 18:35:28.970718    1552 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-656c9c8d9c-zfs2w" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 18:35:31 addons-673848 kubelet[1552]: I0725 18:35:31.969947    1552 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-g5k44" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 18:35:42 addons-673848 kubelet[1552]: I0725 18:35:42.970223    1552 scope.go:117] "RemoveContainer" containerID="1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa"
	Jul 25 18:35:42 addons-673848 kubelet[1552]: E0725 18:35:42.971190    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:35:42 addons-673848 kubelet[1552]: I0725 18:35:42.971926    1552 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-bvs6l" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 18:35:54 addons-673848 kubelet[1552]: I0725 18:35:54.970140    1552 scope.go:117] "RemoveContainer" containerID="1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa"
	Jul 25 18:35:54 addons-673848 kubelet[1552]: E0725 18:35:54.970655    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:36:09 addons-673848 kubelet[1552]: I0725 18:36:09.969691    1552 scope.go:117] "RemoveContainer" containerID="1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa"
	Jul 25 18:36:09 addons-673848 kubelet[1552]: E0725 18:36:09.970684    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:36:22 addons-673848 kubelet[1552]: I0725 18:36:22.972074    1552 scope.go:117] "RemoveContainer" containerID="1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa"
	Jul 25 18:36:22 addons-673848 kubelet[1552]: E0725 18:36:22.972552    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:36:28 addons-673848 kubelet[1552]: I0725 18:36:28.970460    1552 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-fc76g" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 18:36:33 addons-673848 kubelet[1552]: I0725 18:36:33.970464    1552 scope.go:117] "RemoveContainer" containerID="1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa"
	Jul 25 18:36:33 addons-673848 kubelet[1552]: E0725 18:36:33.971432    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:36:45 addons-673848 kubelet[1552]: I0725 18:36:45.970070    1552 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-656c9c8d9c-zfs2w" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 18:36:45 addons-673848 kubelet[1552]: I0725 18:36:45.970104    1552 scope.go:117] "RemoveContainer" containerID="1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa"
	Jul 25 18:36:45 addons-673848 kubelet[1552]: E0725 18:36:45.971381    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:36:53 addons-673848 kubelet[1552]: I0725 18:36:53.969790    1552 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7db6d8ff4d-g5k44" secret="" err="secret \"gcp-auth\" not found"
	Jul 25 18:36:57 addons-673848 kubelet[1552]: I0725 18:36:57.970246    1552 scope.go:117] "RemoveContainer" containerID="1dbba43345a71265de700345599ce704defa41f82d056e345d0943dba3afdefa"
	Jul 25 18:36:57 addons-673848 kubelet[1552]: E0725 18:36:57.970778    1552 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-8wplf_gadget(09601151-f9a5-45fa-98d3-18de784f9cde)\"" pod="gadget/gadget-8wplf" podUID="09601151-f9a5-45fa-98d3-18de784f9cde"
	Jul 25 18:37:02 addons-673848 kubelet[1552]: I0725 18:37:02.970711    1552 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-bvs6l" secret="" err="secret \"gcp-auth\" not found"
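
Two independent things repeat in the kubelet log: harmless "gcp-auth" pull-secret warnings (the secret exists only after the gcp-auth addon completes) and a real CrashLoopBackOff for the gadget container. Assuming the cluster were still up, the previous container's output would show why it keeps exiting, e.g.:

	kubectl --context addons-673848 -n gadget logs gadget-8wplf --previous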
	
	
	==> storage-provisioner [46c16ce635c1cf3447eb34be0d31e09a0e59aa11cc943768af7ef641d1f36a9c] <==
	I0725 18:31:09.772294       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 18:31:09.810491       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 18:31:09.810607       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 18:31:09.826742       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 18:31:09.830338       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d6894ae2-a7b6-4c3e-b9fe-393ca0642a20", APIVersion:"v1", ResourceVersion:"526", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-673848_21c0c0bd-0659-47ba-9d8d-7719cf427896 became leader
	I0725 18:31:09.830377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-673848_21c0c0bd-0659-47ba-9d8d-7719cf427896!
	I0725 18:31:09.931099       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-673848_21c0c0bd-0659-47ba-9d8d-7719cf427896!
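
Leader election here is backed by the kube-system/k8s.io-minikube-hostpath Endpoints object; if leadership were ever in doubt, its annotations name the current holder (illustrative command, not part of the run):

	kubectl --context addons-673848 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml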
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-673848 -n addons-673848
helpers_test.go:261: (dbg) Run:  kubectl --context addons-673848 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-bl6jb ingress-nginx-admission-patch-dtq96 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-673848 describe pod ingress-nginx-admission-create-bl6jb ingress-nginx-admission-patch-dtq96 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-673848 describe pod ingress-nginx-admission-create-bl6jb ingress-nginx-admission-patch-dtq96 test-job-nginx-0: exit status 1 (84.152827ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bl6jb" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-dtq96" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-673848 describe pod ingress-nginx-admission-create-bl6jb ingress-nginx-admission-patch-dtq96 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.85s)
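
To reproduce only this failure, the subtest can be run in isolation. A sketch assuming minikube's test/integration layout and a prebuilt out/minikube-linux-arm64 binary (the harness may require additional flags on a given CI host):

	go test ./test/integration -run 'TestAddons/serial/Volcano' -timeout 30m -v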

TestStartStop/group/old-k8s-version/serial/SecondStart (377.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-262689 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-262689 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m14.260094053s)

-- stdout --
	* [old-k8s-version-262689] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-262689" primary control-plane node in "old-k8s-version-262689" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Restarting existing docker container for "old-k8s-version-262689" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-262689 addons enable metrics-server
	
	* Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0725 19:28:16.781932  688921 out.go:291] Setting OutFile to fd 1 ...
	I0725 19:28:16.782122  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:28:16.782151  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:28:16.782171  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:28:16.782476  688921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 19:28:16.782862  688921 out.go:298] Setting JSON to false
	I0725 19:28:16.783949  688921 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11446,"bootTime":1721924251,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 19:28:16.784048  688921 start.go:139] virtualization:  
	I0725 19:28:16.786539  688921 out.go:177] * [old-k8s-version-262689] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0725 19:28:16.789162  688921 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 19:28:16.789336  688921 notify.go:220] Checking for updates...
	I0725 19:28:16.793393  688921 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 19:28:16.795348  688921 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 19:28:16.797182  688921 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 19:28:16.799134  688921 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0725 19:28:16.801258  688921 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 19:28:16.803535  688921 config.go:182] Loaded profile config "old-k8s-version-262689": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0725 19:28:16.805808  688921 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0725 19:28:16.807557  688921 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 19:28:16.840472  688921 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 19:28:16.840617  688921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 19:28:16.899684  688921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-25 19:28:16.888866993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 19:28:16.899810  688921 docker.go:307] overlay module found
	I0725 19:28:16.903099  688921 out.go:177] * Using the docker driver based on existing profile
	I0725 19:28:16.904738  688921 start.go:297] selected driver: docker
	I0725 19:28:16.904757  688921 start.go:901] validating driver "docker" against &{Name:old-k8s-version-262689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-262689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:28:16.904867  688921 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 19:28:16.905536  688921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 19:28:16.979605  688921 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-25 19:28:16.970283159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 19:28:16.980010  688921 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:28:16.980058  688921 cni.go:84] Creating CNI manager for ""
	I0725 19:28:16.980071  688921 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 19:28:16.980123  688921 start.go:340] cluster config:
	{Name:old-k8s-version-262689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-262689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:28:16.982399  688921 out.go:177] * Starting "old-k8s-version-262689" primary control-plane node in "old-k8s-version-262689" cluster
	I0725 19:28:16.984340  688921 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0725 19:28:16.986093  688921 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0725 19:28:16.987887  688921 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0725 19:28:16.987941  688921 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0725 19:28:16.987954  688921 cache.go:56] Caching tarball of preloaded images
	I0725 19:28:16.987956  688921 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0725 19:28:16.988035  688921 preload.go:172] Found /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 19:28:16.988044  688921 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0725 19:28:16.988158  688921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/config.json ...
	W0725 19:28:17.011489  688921 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0725 19:28:17.011511  688921 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0725 19:28:17.011590  688921 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0725 19:28:17.011621  688921 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0725 19:28:17.011629  688921 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0725 19:28:17.011638  688921 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0725 19:28:17.011643  688921 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0725 19:28:17.198300  688921 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0725 19:28:17.198380  688921 cache.go:194] Successfully downloaded all kic artifacts
	I0725 19:28:17.198423  688921 start.go:360] acquireMachinesLock for old-k8s-version-262689: {Name:mk52d59fe6ca7e9ea9e4daeb7fea024956b4cbea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 19:28:17.198511  688921 start.go:364] duration metric: took 55.309µs to acquireMachinesLock for "old-k8s-version-262689"
	I0725 19:28:17.198536  688921 start.go:96] Skipping create...Using existing machine configuration
	I0725 19:28:17.198545  688921 fix.go:54] fixHost starting: 
	I0725 19:28:17.198825  688921 cli_runner.go:164] Run: docker container inspect old-k8s-version-262689 --format={{.State.Status}}
	I0725 19:28:17.215468  688921 fix.go:112] recreateIfNeeded on old-k8s-version-262689: state=Stopped err=<nil>
	W0725 19:28:17.215506  688921 fix.go:138] unexpected machine state, will restart: <nil>
	I0725 19:28:17.218201  688921 out.go:177] * Restarting existing docker container for "old-k8s-version-262689" ...
	I0725 19:28:17.220270  688921 cli_runner.go:164] Run: docker start old-k8s-version-262689
	I0725 19:28:17.531594  688921 cli_runner.go:164] Run: docker container inspect old-k8s-version-262689 --format={{.State.Status}}
	I0725 19:28:17.558480  688921 kic.go:430] container "old-k8s-version-262689" state is running.
	I0725 19:28:17.561171  688921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-262689
	I0725 19:28:17.584061  688921 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/config.json ...
	I0725 19:28:17.584283  688921 machine.go:94] provisionDockerMachine start ...
	I0725 19:28:17.584353  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:17.612330  688921 main.go:141] libmachine: Using SSH client type: native
	I0725 19:28:17.612743  688921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33496 <nil> <nil>}
	I0725 19:28:17.612756  688921 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 19:28:17.613319  688921 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58630->127.0.0.1:33496: read: connection reset by peer
	I0725 19:28:20.778842  688921 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-262689
	
	I0725 19:28:20.778875  688921 ubuntu.go:169] provisioning hostname "old-k8s-version-262689"
	I0725 19:28:20.779003  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:20.800334  688921 main.go:141] libmachine: Using SSH client type: native
	I0725 19:28:20.800596  688921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33496 <nil> <nil>}
	I0725 19:28:20.800613  688921 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-262689 && echo "old-k8s-version-262689" | sudo tee /etc/hostname
	I0725 19:28:20.952370  688921 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-262689
	
	I0725 19:28:20.952449  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:20.972122  688921 main.go:141] libmachine: Using SSH client type: native
	I0725 19:28:20.972415  688921 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33496 <nil> <nil>}
	I0725 19:28:20.972433  688921 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-262689' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-262689/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-262689' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 19:28:21.119030  688921 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:28:21.119109  688921 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19326-431487/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-431487/.minikube}
	I0725 19:28:21.119152  688921 ubuntu.go:177] setting up certificates
	I0725 19:28:21.119163  688921 provision.go:84] configureAuth start
	I0725 19:28:21.119231  688921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-262689
	I0725 19:28:21.136800  688921 provision.go:143] copyHostCerts
	I0725 19:28:21.136873  688921 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-431487/.minikube/ca.pem, removing ...
	I0725 19:28:21.136890  688921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-431487/.minikube/ca.pem
	I0725 19:28:21.136966  688921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/ca.pem (1082 bytes)
	I0725 19:28:21.137070  688921 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-431487/.minikube/cert.pem, removing ...
	I0725 19:28:21.137080  688921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-431487/.minikube/cert.pem
	I0725 19:28:21.137106  688921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/cert.pem (1123 bytes)
	I0725 19:28:21.137164  688921 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-431487/.minikube/key.pem, removing ...
	I0725 19:28:21.137173  688921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-431487/.minikube/key.pem
	I0725 19:28:21.137197  688921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/key.pem (1679 bytes)
	I0725 19:28:21.137250  688921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-262689 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-262689]
	I0725 19:28:21.556411  688921 provision.go:177] copyRemoteCerts
	I0725 19:28:21.556483  688921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 19:28:21.556532  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:21.574810  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:21.672235  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 19:28:21.697894  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0725 19:28:21.722590  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0725 19:28:21.748096  688921 provision.go:87] duration metric: took 628.919694ms to configureAuth
	I0725 19:28:21.748123  688921 ubuntu.go:193] setting minikube options for container-runtime
	I0725 19:28:21.748328  688921 config.go:182] Loaded profile config "old-k8s-version-262689": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0725 19:28:21.748335  688921 machine.go:97] duration metric: took 4.164044592s to provisionDockerMachine
	I0725 19:28:21.748343  688921 start.go:293] postStartSetup for "old-k8s-version-262689" (driver="docker")
	I0725 19:28:21.748354  688921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 19:28:21.748408  688921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 19:28:21.748447  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:21.765175  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:21.860149  688921 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 19:28:21.863462  688921 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 19:28:21.863497  688921 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 19:28:21.863508  688921 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 19:28:21.863516  688921 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0725 19:28:21.863529  688921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-431487/.minikube/addons for local assets ...
	I0725 19:28:21.863598  688921 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-431487/.minikube/files for local assets ...
	I0725 19:28:21.863689  688921 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/ssl/certs/4368932.pem -> 4368932.pem in /etc/ssl/certs
	I0725 19:28:21.863798  688921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 19:28:21.872378  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/ssl/certs/4368932.pem --> /etc/ssl/certs/4368932.pem (1708 bytes)
	I0725 19:28:21.897064  688921 start.go:296] duration metric: took 148.705696ms for postStartSetup
	I0725 19:28:21.897162  688921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 19:28:21.897213  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:21.914083  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:22.005476  688921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 19:28:22.011775  688921 fix.go:56] duration metric: took 4.813219299s for fixHost
	I0725 19:28:22.011805  688921 start.go:83] releasing machines lock for "old-k8s-version-262689", held for 4.813282379s
	I0725 19:28:22.011888  688921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-262689
	I0725 19:28:22.029451  688921 ssh_runner.go:195] Run: cat /version.json
	I0725 19:28:22.029523  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:22.029800  688921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 19:28:22.029876  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:22.051181  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:22.055319  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:22.279939  688921 ssh_runner.go:195] Run: systemctl --version
	I0725 19:28:22.284521  688921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 19:28:22.288879  688921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0725 19:28:22.307468  688921 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0725 19:28:22.307577  688921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 19:28:22.317035  688921 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0725 19:28:22.317071  688921 start.go:495] detecting cgroup driver to use...
	I0725 19:28:22.317104  688921 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0725 19:28:22.317155  688921 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 19:28:22.331072  688921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 19:28:22.343398  688921 docker.go:217] disabling cri-docker service (if available) ...
	I0725 19:28:22.343489  688921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 19:28:22.357167  688921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 19:28:22.368811  688921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 19:28:22.462260  688921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 19:28:22.544348  688921 docker.go:233] disabling docker service ...
	I0725 19:28:22.544428  688921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 19:28:22.556764  688921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 19:28:22.568299  688921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 19:28:22.647176  688921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 19:28:22.728262  688921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 19:28:22.740212  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 19:28:22.757196  688921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0725 19:28:22.768753  688921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 19:28:22.778693  688921 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 19:28:22.778779  688921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 19:28:22.789367  688921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 19:28:22.799803  688921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 19:28:22.809741  688921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 19:28:22.819795  688921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 19:28:22.829369  688921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 19:28:22.839530  688921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 19:28:22.848357  688921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 19:28:22.856953  688921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:28:22.943881  688921 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0725 19:28:23.120624  688921 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0725 19:28:23.120708  688921 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0725 19:28:23.125252  688921 start.go:563] Will wait 60s for crictl version
	I0725 19:28:23.125332  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:28:23.129711  688921 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 19:28:23.170378  688921 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0725 19:28:23.170480  688921 ssh_runner.go:195] Run: containerd --version
	I0725 19:28:23.196798  688921 ssh_runner.go:195] Run: containerd --version
	I0725 19:28:23.226703  688921 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
	I0725 19:28:23.228377  688921 cli_runner.go:164] Run: docker network inspect old-k8s-version-262689 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 19:28:23.244122  688921 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0725 19:28:23.247719  688921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:28:23.258886  688921 kubeadm.go:883] updating cluster {Name:old-k8s-version-262689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-262689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 19:28:23.259036  688921 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0725 19:28:23.259099  688921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:28:23.297863  688921 containerd.go:627] all images are preloaded for containerd runtime.
	I0725 19:28:23.297889  688921 containerd.go:534] Images already preloaded, skipping extraction
	I0725 19:28:23.297951  688921 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:28:23.334709  688921 containerd.go:627] all images are preloaded for containerd runtime.
	I0725 19:28:23.334732  688921 cache_images.go:84] Images are preloaded, skipping loading
	I0725 19:28:23.334741  688921 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I0725 19:28:23.334854  688921 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-262689 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-262689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 19:28:23.334965  688921 ssh_runner.go:195] Run: sudo crictl info
	I0725 19:28:23.377347  688921 cni.go:84] Creating CNI manager for ""
	I0725 19:28:23.377376  688921 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 19:28:23.377390  688921 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 19:28:23.377414  688921 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-262689 NodeName:old-k8s-version-262689 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0725 19:28:23.377550  688921 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-262689"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 19:28:23.377626  688921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0725 19:28:23.387456  688921 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 19:28:23.387588  688921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 19:28:23.396709  688921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0725 19:28:23.416069  688921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 19:28:23.434474  688921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0725 19:28:23.453325  688921 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0725 19:28:23.456851  688921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:28:23.468292  688921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:28:23.561090  688921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:28:23.575265  688921 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689 for IP: 192.168.85.2
	I0725 19:28:23.575329  688921 certs.go:194] generating shared ca certs ...
	I0725 19:28:23.575379  688921 certs.go:226] acquiring lock for ca certs: {Name:mk41d7b1e7cb52699a093c81e00768f54d73ad8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:28:23.575587  688921 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-431487/.minikube/ca.key
	I0725 19:28:23.575680  688921 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.key
	I0725 19:28:23.575713  688921 certs.go:256] generating profile certs ...
	I0725 19:28:23.575834  688921 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.key
	I0725 19:28:23.576632  688921 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/apiserver.key.bc2b7a55
	I0725 19:28:23.577315  688921 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/proxy-client.key
	I0725 19:28:23.577498  688921 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/436893.pem (1338 bytes)
	W0725 19:28:23.577597  688921 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-431487/.minikube/certs/436893_empty.pem, impossibly tiny 0 bytes
	I0725 19:28:23.577636  688921 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem (1675 bytes)
	I0725 19:28:23.577693  688921 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem (1082 bytes)
	I0725 19:28:23.577744  688921 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem (1123 bytes)
	I0725 19:28:23.577797  688921 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem (1679 bytes)
	I0725 19:28:23.577872  688921 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/ssl/certs/4368932.pem (1708 bytes)
	I0725 19:28:23.578548  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 19:28:23.613381  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 19:28:23.638752  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 19:28:23.668297  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 19:28:23.697194  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0725 19:28:23.728342  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0725 19:28:23.758289  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 19:28:23.790378  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0725 19:28:23.816130  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/ssl/certs/4368932.pem --> /usr/share/ca-certificates/4368932.pem (1708 bytes)
	I0725 19:28:23.841445  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 19:28:23.867182  688921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/certs/436893.pem --> /usr/share/ca-certificates/436893.pem (1338 bytes)
	I0725 19:28:23.891951  688921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 19:28:23.910425  688921 ssh_runner.go:195] Run: openssl version
	I0725 19:28:23.915944  688921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4368932.pem && ln -fs /usr/share/ca-certificates/4368932.pem /etc/ssl/certs/4368932.pem"
	I0725 19:28:23.925736  688921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4368932.pem
	I0725 19:28:23.929384  688921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 18:40 /usr/share/ca-certificates/4368932.pem
	I0725 19:28:23.929470  688921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4368932.pem
	I0725 19:28:23.936486  688921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4368932.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 19:28:23.945613  688921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 19:28:23.955221  688921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:28:23.958826  688921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:28:23.959032  688921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:28:23.966322  688921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 19:28:23.975731  688921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/436893.pem && ln -fs /usr/share/ca-certificates/436893.pem /etc/ssl/certs/436893.pem"
	I0725 19:28:23.985325  688921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/436893.pem
	I0725 19:28:23.988971  688921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 18:40 /usr/share/ca-certificates/436893.pem
	I0725 19:28:23.989062  688921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/436893.pem
	I0725 19:28:23.996226  688921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/436893.pem /etc/ssl/certs/51391683.0"
	I0725 19:28:24.007920  688921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 19:28:24.013285  688921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0725 19:28:24.021426  688921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0725 19:28:24.029383  688921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0725 19:28:24.037453  688921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0725 19:28:24.045450  688921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0725 19:28:24.053091  688921 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0725 19:28:24.060745  688921 kubeadm.go:392] StartCluster: {Name:old-k8s-version-262689 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-262689 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:28:24.060849  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0725 19:28:24.060928  688921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 19:28:24.118182  688921 cri.go:89] found id: "6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:28:24.118212  688921 cri.go:89] found id: "687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:28:24.118217  688921 cri.go:89] found id: "1d9b398db1989c14a18312ca23045842653811f0be4341afde62a30b1e42aa41"
	I0725 19:28:24.118221  688921 cri.go:89] found id: "5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:28:24.118224  688921 cri.go:89] found id: "2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:28:24.118227  688921 cri.go:89] found id: "092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:28:24.118231  688921 cri.go:89] found id: "a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:28:24.118234  688921 cri.go:89] found id: "bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:28:24.118237  688921 cri.go:89] found id: ""
	I0725 19:28:24.118298  688921 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0725 19:28:24.131472  688921 cri.go:116] JSON = null
	W0725 19:28:24.131574  688921 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0725 19:28:24.131678  688921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 19:28:24.141383  688921 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0725 19:28:24.141404  688921 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0725 19:28:24.141484  688921 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0725 19:28:24.151055  688921 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0725 19:28:24.151747  688921 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-262689" does not appear in /home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 19:28:24.152015  688921 kubeconfig.go:62] /home/jenkins/minikube-integration/19326-431487/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-262689" cluster setting kubeconfig missing "old-k8s-version-262689" context setting]
	I0725 19:28:24.152551  688921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/kubeconfig: {Name:mk3cdfe1101bbbc0f7441d92cff5cd6b29ee3404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:28:24.153979  688921 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0725 19:28:24.165735  688921 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0725 19:28:24.165767  688921 kubeadm.go:597] duration metric: took 24.356597ms to restartPrimaryControlPlane
	I0725 19:28:24.165776  688921 kubeadm.go:394] duration metric: took 105.04483ms to StartCluster
	I0725 19:28:24.165791  688921 settings.go:142] acquiring lock: {Name:mk69edff96840eebb76289a50cf78daf601fe5de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:28:24.165849  688921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 19:28:24.166744  688921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/kubeconfig: {Name:mk3cdfe1101bbbc0f7441d92cff5cd6b29ee3404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:28:24.167046  688921 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0725 19:28:24.167322  688921 config.go:182] Loaded profile config "old-k8s-version-262689": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0725 19:28:24.167399  688921 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 19:28:24.167545  688921 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-262689"
	I0725 19:28:24.167585  688921 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-262689"
	W0725 19:28:24.167659  688921 addons.go:243] addon storage-provisioner should already be in state true
	I0725 19:28:24.167687  688921 host.go:66] Checking if "old-k8s-version-262689" exists ...
	I0725 19:28:24.167619  688921 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-262689"
	I0725 19:28:24.167773  688921 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-262689"
	I0725 19:28:24.168091  688921 cli_runner.go:164] Run: docker container inspect old-k8s-version-262689 --format={{.State.Status}}
	I0725 19:28:24.168109  688921 cli_runner.go:164] Run: docker container inspect old-k8s-version-262689 --format={{.State.Status}}
	I0725 19:28:24.167624  688921 addons.go:69] Setting dashboard=true in profile "old-k8s-version-262689"
	I0725 19:28:24.168577  688921 addons.go:234] Setting addon dashboard=true in "old-k8s-version-262689"
	W0725 19:28:24.168592  688921 addons.go:243] addon dashboard should already be in state true
	I0725 19:28:24.168624  688921 host.go:66] Checking if "old-k8s-version-262689" exists ...
	I0725 19:28:24.169026  688921 cli_runner.go:164] Run: docker container inspect old-k8s-version-262689 --format={{.State.Status}}
	I0725 19:28:24.172116  688921 out.go:177] * Verifying Kubernetes components...
	I0725 19:28:24.167629  688921 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-262689"
	I0725 19:28:24.172531  688921 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-262689"
	W0725 19:28:24.172546  688921 addons.go:243] addon metrics-server should already be in state true
	I0725 19:28:24.172579  688921 host.go:66] Checking if "old-k8s-version-262689" exists ...
	I0725 19:28:24.172991  688921 cli_runner.go:164] Run: docker container inspect old-k8s-version-262689 --format={{.State.Status}}
	I0725 19:28:24.180683  688921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:28:24.197765  688921 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-262689"
	W0725 19:28:24.197789  688921 addons.go:243] addon default-storageclass should already be in state true
	I0725 19:28:24.197815  688921 host.go:66] Checking if "old-k8s-version-262689" exists ...
	I0725 19:28:24.198200  688921 cli_runner.go:164] Run: docker container inspect old-k8s-version-262689 --format={{.State.Status}}
	I0725 19:28:24.222601  688921 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 19:28:24.222628  688921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 19:28:24.222695  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:24.225684  688921 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 19:28:24.227599  688921 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:28:24.227619  688921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 19:28:24.227688  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:24.253248  688921 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0725 19:28:24.253247  688921 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0725 19:28:24.255691  688921 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0725 19:28:24.255721  688921 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0725 19:28:24.255798  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:24.258004  688921 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0725 19:28:24.260144  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0725 19:28:24.260170  688921 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0725 19:28:24.260239  688921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-262689
	I0725 19:28:24.304989  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:24.305478  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:24.313991  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:24.329077  688921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33496 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/old-k8s-version-262689/id_rsa Username:docker}
	I0725 19:28:24.340109  688921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:28:24.360695  688921 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-262689" to be "Ready" ...
	I0725 19:28:24.445495  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 19:28:24.453347  688921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0725 19:28:24.453372  688921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0725 19:28:24.485068  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:28:24.490093  688921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0725 19:28:24.490115  688921 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0725 19:28:24.502415  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0725 19:28:24.502438  688921 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0725 19:28:24.564168  688921 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 19:28:24.564190  688921 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0725 19:28:24.566501  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0725 19:28:24.566521  688921 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0725 19:28:24.605415  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 19:28:24.607905  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0725 19:28:24.607926  688921 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0725 19:28:24.630747  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.630782  688921 retry.go:31] will retry after 283.280345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0725 19:28:24.650092  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.650125  688921 retry.go:31] will retry after 192.18969ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.660461  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0725 19:28:24.660488  688921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0725 19:28:24.679476  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0725 19:28:24.679502  688921 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0725 19:28:24.697789  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0725 19:28:24.697811  688921 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0725 19:28:24.715737  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0725 19:28:24.715759  688921 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0725 19:28:24.734059  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0725 19:28:24.734082  688921 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	W0725 19:28:24.745766  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.745798  688921 retry.go:31] will retry after 280.658949ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.753238  688921 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 19:28:24.753309  688921 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0725 19:28:24.773744  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0725 19:28:24.842425  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.842500  688921 retry.go:31] will retry after 302.32861ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.842434  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:28:24.914535  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0725 19:28:24.921810  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.921883  688921 retry.go:31] will retry after 234.409766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0725 19:28:24.991177  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:24.991210  688921 retry.go:31] will retry after 445.227503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.027465  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0725 19:28:25.104088  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.104132  688921 retry.go:31] will retry after 533.613917ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.145375  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 19:28:25.156749  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0725 19:28:25.241267  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.241313  688921 retry.go:31] will retry after 313.63186ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0725 19:28:25.266537  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.266569  688921 retry.go:31] will retry after 460.201706ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.437549  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0725 19:28:25.515571  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.515604  688921 retry.go:31] will retry after 755.935876ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.555800  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0725 19:28:25.625591  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.625623  688921 retry.go:31] will retry after 785.377318ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.638743  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0725 19:28:25.707428  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.707460  688921 retry.go:31] will retry after 794.102193ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.727779  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0725 19:28:25.799931  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:25.799961  688921 retry.go:31] will retry after 973.230313ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:26.272278  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0725 19:28:26.347605  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:26.347678  688921 retry.go:31] will retry after 1.044600994s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:26.362263  688921 node_ready.go:53] error getting node "old-k8s-version-262689": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-262689": dial tcp 192.168.85.2:8443: connect: connection refused
	I0725 19:28:26.411633  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0725 19:28:26.485918  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:26.485958  688921 retry.go:31] will retry after 605.710532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:26.502261  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0725 19:28:26.578560  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:26.578612  688921 retry.go:31] will retry after 1.152231704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:26.774379  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0725 19:28:26.844288  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:26.844325  688921 retry.go:31] will retry after 1.032868324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:27.092646  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0725 19:28:27.163796  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:27.163832  688921 retry.go:31] will retry after 1.499808843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:27.392679  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0725 19:28:27.464917  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:27.464947  688921 retry.go:31] will retry after 1.625870195s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:27.731183  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0725 19:28:27.802896  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:27.802951  688921 retry.go:31] will retry after 973.836717ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:27.877849  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0725 19:28:27.952942  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:27.953029  688921 retry.go:31] will retry after 1.860552888s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:28.664560  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0725 19:28:28.736880  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:28.736912  688921 retry.go:31] will retry after 1.347570963s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:28.777014  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0725 19:28:28.850737  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:28.850770  688921 retry.go:31] will retry after 2.379555503s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:28.861440  688921 node_ready.go:53] error getting node "old-k8s-version-262689": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-262689": dial tcp 192.168.85.2:8443: connect: connection refused
	I0725 19:28:29.091855  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0725 19:28:29.163574  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:29.163603  688921 retry.go:31] will retry after 2.118749439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:29.813858  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0725 19:28:29.882682  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:29.882762  688921 retry.go:31] will retry after 2.509534704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:30.085229  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0725 19:28:30.187228  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:30.187267  688921 retry.go:31] will retry after 3.768348502s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:30.861981  688921 node_ready.go:53] error getting node "old-k8s-version-262689": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-262689": dial tcp 192.168.85.2:8443: connect: connection refused
	I0725 19:28:31.230564  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 19:28:31.283305  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0725 19:28:31.328879  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:31.328918  688921 retry.go:31] will retry after 2.575529832s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0725 19:28:31.413252  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:31.413281  688921 retry.go:31] will retry after 3.929044818s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:32.393288  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0725 19:28:32.505460  688921 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:32.505496  688921 retry.go:31] will retry after 4.058820604s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0725 19:28:32.862162  688921 node_ready.go:53] error getting node "old-k8s-version-262689": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-262689": dial tcp 192.168.85.2:8443: connect: connection refused
	I0725 19:28:33.905244  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0725 19:28:33.956668  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0725 19:28:35.342865  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0725 19:28:36.565177  688921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:28:41.940648  688921 node_ready.go:49] node "old-k8s-version-262689" has status "Ready":"True"
	I0725 19:28:41.940672  688921 node_ready.go:38] duration metric: took 17.579930551s for node "old-k8s-version-262689" to be "Ready" ...
	I0725 19:28:41.940681  688921 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:28:42.323112  688921 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-djgf4" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:42.758916  688921 pod_ready.go:92] pod "coredns-74ff55c5b-djgf4" in "kube-system" namespace has status "Ready":"True"
	I0725 19:28:42.759006  688921 pod_ready.go:81] duration metric: took 435.805641ms for pod "coredns-74ff55c5b-djgf4" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:42.759035  688921 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-262689" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:42.961624  688921 pod_ready.go:92] pod "etcd-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"True"
	I0725 19:28:42.961697  688921 pod_ready.go:81] duration metric: took 202.639557ms for pod "etcd-old-k8s-version-262689" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:42.961725  688921 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-262689" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:43.118968  688921 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"True"
	I0725 19:28:43.119042  688921 pod_ready.go:81] duration metric: took 157.297064ms for pod "kube-apiserver-old-k8s-version-262689" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:43.119071  688921 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-262689" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:43.166152  688921 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"True"
	I0725 19:28:43.166228  688921 pod_ready.go:81] duration metric: took 47.124737ms for pod "kube-controller-manager-old-k8s-version-262689" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:43.166254  688921 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-srbcv" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:43.222129  688921 pod_ready.go:92] pod "kube-proxy-srbcv" in "kube-system" namespace has status "Ready":"True"
	I0725 19:28:43.222193  688921 pod_ready.go:81] duration metric: took 55.917971ms for pod "kube-proxy-srbcv" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:43.222228  688921 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace to be "Ready" ...
	I0725 19:28:45.258604  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:28:45.862292  688921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (11.957008671s)
	I0725 19:28:45.862343  688921 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-262689"
	I0725 19:28:46.359586  688921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.402858335s)
	I0725 19:28:46.359846  688921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.016943916s)
	I0725 19:28:46.361896  688921 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-262689 addons enable metrics-server
	
	I0725 19:28:46.372528  688921 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.807315722s)
	I0725 19:28:46.377014  688921 out.go:177] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	I0725 19:28:46.378786  688921 addons.go:510] duration metric: took 22.211380884s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I0725 19:28:47.728957  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:28:50.243655  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:28:52.730611  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:28:55.228972  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:28:57.244538  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:28:59.735814  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:02.026448  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:04.229038  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:06.728468  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:08.729167  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:11.236067  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:13.729844  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:16.228242  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:18.228983  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:20.229832  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:22.252852  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:24.728531  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:26.728849  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:28.729367  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:31.228766  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:33.730049  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:35.730470  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:38.228977  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:40.229059  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:42.250447  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:44.728457  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:46.729106  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:48.730669  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:51.247704  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:53.728691  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:56.229029  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:29:58.729004  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:00.729425  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:03.229772  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:05.729634  688921 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:06.729150  688921 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace has status "Ready":"True"
	I0725 19:30:06.729179  688921 pod_ready.go:81] duration metric: took 1m23.506930201s for pod "kube-scheduler-old-k8s-version-262689" in "kube-system" namespace to be "Ready" ...
	I0725 19:30:06.729195  688921 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace to be "Ready" ...
	I0725 19:30:08.735776  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:10.736156  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:13.235326  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:15.235416  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:17.236004  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:19.242484  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:21.735211  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:23.735945  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:26.239215  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:28.735197  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:30.735933  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:32.779740  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:35.236193  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:37.735911  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:40.247025  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:42.735548  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:44.736880  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:47.235587  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:49.236663  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:51.735450  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:53.736551  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:56.235526  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:30:58.236093  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:00.306674  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:02.735417  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:04.735811  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:07.236266  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:09.735835  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:11.736351  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:14.236188  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:16.237316  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:18.737138  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:21.236225  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:23.735346  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:26.235191  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:28.239449  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:30.736010  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:33.281074  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:35.736132  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:37.736197  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:40.236564  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:42.237400  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:44.735933  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:47.235692  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:49.236477  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:51.735236  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:54.240405  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:56.736199  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:31:59.235148  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:01.237060  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:03.738340  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:06.236978  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:08.736271  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:10.739517  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:13.245022  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:15.735552  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:18.235721  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:20.236538  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:22.735744  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:25.235722  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:27.236640  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:29.736285  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:32.235366  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:34.236596  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:36.735534  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:38.736334  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:41.235939  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:43.734884  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:45.736417  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:48.235478  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:50.236445  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:52.736899  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:54.737092  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:57.236723  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:32:59.244812  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:01.740019  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:04.236083  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:06.735974  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:09.235848  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:11.235931  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:13.236135  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:15.236538  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:17.237479  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:19.735300  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:21.736670  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:24.269025  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:26.736978  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:29.235104  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:31.235396  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:33.236475  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:35.236629  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:37.237325  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:39.239015  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:41.737833  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:44.238271  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:46.737113  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:49.236415  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:51.736605  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:54.236461  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:56.735944  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:59.235650  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:01.736227  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:03.736798  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:06.235979  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:06.737497  688921 pod_ready.go:81] duration metric: took 4m0.008287476s for pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace to be "Ready" ...
	E0725 19:34:06.737526  688921 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 19:34:06.737535  688921 pod_ready.go:38] duration metric: took 5m24.79684369s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:34:06.737548  688921 api_server.go:52] waiting for apiserver process to appear ...
	I0725 19:34:06.737576  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0725 19:34:06.737650  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 19:34:06.807692  688921 cri.go:89] found id: "8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7"
	I0725 19:34:06.807723  688921 cri.go:89] found id: "a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:34:06.807729  688921 cri.go:89] found id: ""
	I0725 19:34:06.807736  688921 logs.go:276] 2 containers: [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7 a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8]
	I0725 19:34:06.807794  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.818733  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.822775  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0725 19:34:06.822852  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 19:34:06.888510  688921 cri.go:89] found id: "1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967"
	I0725 19:34:06.888542  688921 cri.go:89] found id: "bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:34:06.888547  688921 cri.go:89] found id: ""
	I0725 19:34:06.888557  688921 logs.go:276] 2 containers: [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967 bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6]
	I0725 19:34:06.888612  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.892832  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.897974  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0725 19:34:06.898094  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 19:34:06.959812  688921 cri.go:89] found id: "aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05"
	I0725 19:34:06.959838  688921 cri.go:89] found id: "6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:34:06.959844  688921 cri.go:89] found id: ""
	I0725 19:34:06.959851  688921 logs.go:276] 2 containers: [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05 6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8]
	I0725 19:34:06.959956  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.964200  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.968411  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0725 19:34:06.968494  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 19:34:07.033397  688921 cri.go:89] found id: "fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1"
	I0725 19:34:07.033436  688921 cri.go:89] found id: "2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:34:07.033442  688921 cri.go:89] found id: ""
	I0725 19:34:07.033450  688921 logs.go:276] 2 containers: [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1 2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34]
	I0725 19:34:07.033516  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.043063  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.048413  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0725 19:34:07.048505  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 19:34:07.111832  688921 cri.go:89] found id: "09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98"
	I0725 19:34:07.111912  688921 cri.go:89] found id: "5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:34:07.111931  688921 cri.go:89] found id: ""
	I0725 19:34:07.111952  688921 logs.go:276] 2 containers: [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98 5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f]
	I0725 19:34:07.112033  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.117008  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.121337  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 19:34:07.121457  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 19:34:07.187319  688921 cri.go:89] found id: "83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5"
	I0725 19:34:07.187384  688921 cri.go:89] found id: "092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:34:07.187401  688921 cri.go:89] found id: ""
	I0725 19:34:07.187423  688921 logs.go:276] 2 containers: [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5 092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb]
	I0725 19:34:07.187507  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.192267  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.196794  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0725 19:34:07.196914  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 19:34:07.271398  688921 cri.go:89] found id: "ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01"
	I0725 19:34:07.271471  688921 cri.go:89] found id: "687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:34:07.271489  688921 cri.go:89] found id: ""
	I0725 19:34:07.271511  688921 logs.go:276] 2 containers: [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01 687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8]
	I0725 19:34:07.271595  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.275933  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.281359  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0725 19:34:07.281487  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 19:34:07.377349  688921 cri.go:89] found id: "1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e"
	I0725 19:34:07.377413  688921 cri.go:89] found id: "6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985"
	I0725 19:34:07.377432  688921 cri.go:89] found id: ""
	I0725 19:34:07.377454  688921 logs.go:276] 2 containers: [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e 6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985]
	I0725 19:34:07.377542  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.383264  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.388959  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 19:34:07.389087  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 19:34:07.461197  688921 cri.go:89] found id: "22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc"
	I0725 19:34:07.461216  688921 cri.go:89] found id: ""
	I0725 19:34:07.461223  688921 logs.go:276] 1 containers: [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc]
	I0725 19:34:07.461279  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.465404  688921 logs.go:123] Gathering logs for storage-provisioner [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e] ...
	I0725 19:34:07.465428  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e"
	I0725 19:34:07.547164  688921 logs.go:123] Gathering logs for container status ...
	I0725 19:34:07.547233  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 19:34:07.624426  688921 logs.go:123] Gathering logs for kubelet ...
	I0725 19:34:07.624507  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0725 19:34:07.703403  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:42 old-k8s-version-262689 kubelet[657]: E0725 19:28:42.036852     657 reflector.go:138] object-"kube-system"/"kindnet-token-vxqld": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vxqld" is forbidden: User "system:node:old-k8s-version-262689" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-262689' and this object
	W0725 19:34:07.703640  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:42 old-k8s-version-262689 kubelet[657]: E0725 19:28:42.037214     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-2gxrg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2gxrg" is forbidden: User "system:node:old-k8s-version-262689" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-262689' and this object
	W0725 19:34:07.707435  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:45 old-k8s-version-262689 kubelet[657]: E0725 19:28:45.149554     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.707634  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:46 old-k8s-version-262689 kubelet[657]: E0725 19:28:46.003537     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.710803  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:57 old-k8s-version-262689 kubelet[657]: E0725 19:28:57.257602     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.713278  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:09 old-k8s-version-262689 kubelet[657]: E0725 19:29:09.369957     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.713674  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:10 old-k8s-version-262689 kubelet[657]: E0725 19:29:10.383955     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.713885  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:11 old-k8s-version-262689 kubelet[657]: E0725 19:29:11.215410     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.714611  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:16 old-k8s-version-262689 kubelet[657]: E0725 19:29:16.271158     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.715106  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:17 old-k8s-version-262689 kubelet[657]: E0725 19:29:17.406387     657 pod_workers.go:191] Error syncing pod 192a4c32-53cd-4ce7-ba80-a523469b645d ("storage-provisioner_kube-system(192a4c32-53cd-4ce7-ba80-a523469b645d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(192a4c32-53cd-4ce7-ba80-a523469b645d)"
	W0725 19:34:07.718311  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:26 old-k8s-version-262689 kubelet[657]: E0725 19:29:26.224127     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.719087  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:29 old-k8s-version-262689 kubelet[657]: E0725 19:29:29.440381     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.719629  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:36 old-k8s-version-262689 kubelet[657]: E0725 19:29:36.271666     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.719827  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:37 old-k8s-version-262689 kubelet[657]: E0725 19:29:37.215365     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.720195  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:51 old-k8s-version-262689 kubelet[657]: E0725 19:29:51.215710     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.720693  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:51 old-k8s-version-262689 kubelet[657]: E0725 19:29:51.524343     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.721057  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:56 old-k8s-version-262689 kubelet[657]: E0725 19:29:56.271047     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.721244  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:04 old-k8s-version-262689 kubelet[657]: E0725 19:30:04.215943     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.721572  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:11 old-k8s-version-262689 kubelet[657]: E0725 19:30:11.215709     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.724232  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:19 old-k8s-version-262689 kubelet[657]: E0725 19:30:19.231937     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.724588  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:25 old-k8s-version-262689 kubelet[657]: E0725 19:30:25.214643     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.724789  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:32 old-k8s-version-262689 kubelet[657]: E0725 19:30:32.216336     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.725458  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:40 old-k8s-version-262689 kubelet[657]: E0725 19:30:40.656202     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.725659  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:46 old-k8s-version-262689 kubelet[657]: E0725 19:30:46.215221     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.726057  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:46 old-k8s-version-262689 kubelet[657]: E0725 19:30:46.271828     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.726400  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:01 old-k8s-version-262689 kubelet[657]: E0725 19:31:01.215458     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.726633  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:01 old-k8s-version-262689 kubelet[657]: E0725 19:31:01.215611     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.726841  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:13 old-k8s-version-262689 kubelet[657]: E0725 19:31:13.215172     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.727236  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:14 old-k8s-version-262689 kubelet[657]: E0725 19:31:14.214851     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.727622  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:26 old-k8s-version-262689 kubelet[657]: E0725 19:31:26.215358     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.727848  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:27 old-k8s-version-262689 kubelet[657]: E0725 19:31:27.215041     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.728224  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:37 old-k8s-version-262689 kubelet[657]: E0725 19:31:37.215248     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.731097  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:40 old-k8s-version-262689 kubelet[657]: E0725 19:31:40.223791     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.731472  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:52 old-k8s-version-262689 kubelet[657]: E0725 19:31:52.217037     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.731662  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:54 old-k8s-version-262689 kubelet[657]: E0725 19:31:54.225403     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.731924  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:05 old-k8s-version-262689 kubelet[657]: E0725 19:32:05.215333     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.732602  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:07 old-k8s-version-262689 kubelet[657]: E0725 19:32:07.888105     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.732942  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:16 old-k8s-version-262689 kubelet[657]: E0725 19:32:16.271463     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.733127  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:19 old-k8s-version-262689 kubelet[657]: E0725 19:32:19.215018     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.733595  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:28 old-k8s-version-262689 kubelet[657]: E0725 19:32:28.214662     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.733787  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:31 old-k8s-version-262689 kubelet[657]: E0725 19:32:31.215144     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.734174  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:41 old-k8s-version-262689 kubelet[657]: E0725 19:32:41.214882     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.734408  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:46 old-k8s-version-262689 kubelet[657]: E0725 19:32:46.215198     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.734844  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:52 old-k8s-version-262689 kubelet[657]: E0725 19:32:52.215208     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.735085  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:59 old-k8s-version-262689 kubelet[657]: E0725 19:32:59.215146     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.735513  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:06 old-k8s-version-262689 kubelet[657]: E0725 19:33:06.214773     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.735733  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:12 old-k8s-version-262689 kubelet[657]: E0725 19:33:12.219911     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.736070  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:17 old-k8s-version-262689 kubelet[657]: E0725 19:33:17.215219     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.736255  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:25 old-k8s-version-262689 kubelet[657]: E0725 19:33:25.215021     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.736581  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:32 old-k8s-version-262689 kubelet[657]: E0725 19:33:32.216211     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.736764  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:38 old-k8s-version-262689 kubelet[657]: E0725 19:33:38.215054     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.737090  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: E0725 19:33:45.216141     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.737377  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.737718  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.737917  688921 logs.go:138] Found kubelet problem: Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
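(Editor's note on the block above: every "Found kubelet problem" entry comes from minikube scanning the last 400 lines of the kubelet journal it just collected. The sketch below shows roughly what that scan looks like; it is a minimal illustration, assuming journalctl is available locally, and the substring list is illustrative only, not minikube's actual matcher in logs.go:138.)

// problemscan.go - minimal sketch of flagging kubelet journal entries like the
// "Found kubelet problem" lines above. Illustrative only; not minikube's code.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same journal query the log above shows minikube running over SSH.
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}

	// Flag lines that look like the pod sync failures reported above.
	patterns := []string{"ErrImagePull", "ImagePullBackOff", "CrashLoopBackOff", "Failed to watch"}

	sc := bufio.NewScanner(strings.NewReader(string(out)))
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		for _, p := range patterns {
			if strings.Contains(line, p) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}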
	I0725 19:34:07.737927  688921 logs.go:123] Gathering logs for etcd [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967] ...
	I0725 19:34:07.737943  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967"
	I0725 19:34:07.822664  688921 logs.go:123] Gathering logs for kube-scheduler [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1] ...
	I0725 19:34:07.822740  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1"
	I0725 19:34:07.879095  688921 logs.go:123] Gathering logs for kindnet [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01] ...
	I0725 19:34:07.879168  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01"
	I0725 19:34:07.955648  688921 logs.go:123] Gathering logs for kindnet [687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8] ...
	I0725 19:34:07.955730  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:34:08.037629  688921 logs.go:123] Gathering logs for kube-apiserver [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7] ...
	I0725 19:34:08.037704  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7"
	I0725 19:34:08.114974  688921 logs.go:123] Gathering logs for kube-scheduler [2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34] ...
	I0725 19:34:08.115040  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:34:08.198695  688921 logs.go:123] Gathering logs for kube-controller-manager [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5] ...
	I0725 19:34:08.198774  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5"
	I0725 19:34:08.332924  688921 logs.go:123] Gathering logs for kube-controller-manager [092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb] ...
	I0725 19:34:08.333002  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:34:08.461781  688921 logs.go:123] Gathering logs for kube-proxy [5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f] ...
	I0725 19:34:08.461879  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:34:08.540031  688921 logs.go:123] Gathering logs for storage-provisioner [6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985] ...
	I0725 19:34:08.540060  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985"
	I0725 19:34:08.583359  688921 logs.go:123] Gathering logs for containerd ...
	I0725 19:34:08.583437  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0725 19:34:08.659982  688921 logs.go:123] Gathering logs for dmesg ...
	I0725 19:34:08.660061  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 19:34:08.680183  688921 logs.go:123] Gathering logs for kube-apiserver [a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8] ...
	I0725 19:34:08.680216  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:34:08.758790  688921 logs.go:123] Gathering logs for coredns [6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8] ...
	I0725 19:34:08.758823  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:34:08.810366  688921 logs.go:123] Gathering logs for kube-proxy [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98] ...
	I0725 19:34:08.810401  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98"
	I0725 19:34:08.855821  688921 logs.go:123] Gathering logs for describe nodes ...
	I0725 19:34:08.855850  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 19:34:09.049486  688921 logs.go:123] Gathering logs for etcd [bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6] ...
	I0725 19:34:09.049516  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:34:09.101862  688921 logs.go:123] Gathering logs for coredns [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05] ...
	I0725 19:34:09.101895  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05"
	I0725 19:34:09.150649  688921 logs.go:123] Gathering logs for kubernetes-dashboard [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc] ...
	I0725 19:34:09.150683  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc"
	I0725 19:34:09.195050  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:34:09.195080  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0725 19:34:09.195179  688921 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0725 19:34:09.195327  688921 out.go:239]   Jul 25 19:33:38 old-k8s-version-262689 kubelet[657]: E0725 19:33:38.215054     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jul 25 19:33:38 old-k8s-version-262689 kubelet[657]: E0725 19:33:38.215054     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:09.195345  688921 out.go:239]   Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: E0725 19:33:45.216141     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	  Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: E0725 19:33:45.216141     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:09.195424  688921 out.go:239]   Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:09.195440  688921 out.go:239]   Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	  Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:09.195465  688921 out.go:239]   Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0725 19:34:09.195473  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:34:09.195484  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
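(Editor's note: the next round of log lines, starting at 19:34:19, is minikube's api_server.go waiting for the apiserver healthz status after 5m55s. The sketch below is a rough illustration of that kind of polling loop, assuming a hypothetical apiserver address; the host:port and the skipped TLS verification are assumptions for illustration, not what minikube actually uses.)

// healthzwait.go - rough sketch of polling an apiserver /healthz endpoint,
// similar in spirit to the "waiting for apiserver healthz status" step below.
// The URL is hypothetical; minikube verifies against the cluster certificates.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// TLS verification is skipped only for this illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.85.2:8443/healthz" // hypothetical address for this cluster

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz: ok")
				return
			}
			fmt.Println("apiserver healthz status:", resp.Status)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}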
	I0725 19:34:19.196744  688921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 19:34:19.209410  688921 api_server.go:72] duration metric: took 5m55.042327442s to wait for apiserver process to appear ...
	I0725 19:34:19.209435  688921 api_server.go:88] waiting for apiserver healthz status ...
	I0725 19:34:19.209472  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0725 19:34:19.209531  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 19:34:19.247922  688921 cri.go:89] found id: "8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7"
	I0725 19:34:19.247942  688921 cri.go:89] found id: "a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:34:19.247947  688921 cri.go:89] found id: ""
	I0725 19:34:19.247954  688921 logs.go:276] 2 containers: [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7 a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8]
	I0725 19:34:19.248012  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.252109  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.255778  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0725 19:34:19.255850  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 19:34:19.295831  688921 cri.go:89] found id: "1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967"
	I0725 19:34:19.295853  688921 cri.go:89] found id: "bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:34:19.295859  688921 cri.go:89] found id: ""
	I0725 19:34:19.295866  688921 logs.go:276] 2 containers: [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967 bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6]
	I0725 19:34:19.295924  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.300077  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.303811  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0725 19:34:19.303883  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 19:34:19.349146  688921 cri.go:89] found id: "aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05"
	I0725 19:34:19.349170  688921 cri.go:89] found id: "6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:34:19.349176  688921 cri.go:89] found id: ""
	I0725 19:34:19.349183  688921 logs.go:276] 2 containers: [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05 6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8]
	I0725 19:34:19.349247  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.353162  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.356919  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0725 19:34:19.357009  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 19:34:19.396994  688921 cri.go:89] found id: "fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1"
	I0725 19:34:19.397014  688921 cri.go:89] found id: "2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:34:19.397018  688921 cri.go:89] found id: ""
	I0725 19:34:19.397025  688921 logs.go:276] 2 containers: [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1 2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34]
	I0725 19:34:19.397084  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.401204  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.405026  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0725 19:34:19.405158  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 19:34:19.446347  688921 cri.go:89] found id: "09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98"
	I0725 19:34:19.446370  688921 cri.go:89] found id: "5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:34:19.446375  688921 cri.go:89] found id: ""
	I0725 19:34:19.446383  688921 logs.go:276] 2 containers: [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98 5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f]
	I0725 19:34:19.446443  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.450086  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.453624  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 19:34:19.453731  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 19:34:19.492956  688921 cri.go:89] found id: "83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5"
	I0725 19:34:19.492981  688921 cri.go:89] found id: "092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:34:19.492985  688921 cri.go:89] found id: ""
	I0725 19:34:19.492993  688921 logs.go:276] 2 containers: [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5 092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb]
	I0725 19:34:19.493051  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.497081  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.500953  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0725 19:34:19.501052  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 19:34:19.546766  688921 cri.go:89] found id: "ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01"
	I0725 19:34:19.546791  688921 cri.go:89] found id: "687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:34:19.546797  688921 cri.go:89] found id: ""
	I0725 19:34:19.546804  688921 logs.go:276] 2 containers: [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01 687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8]
	I0725 19:34:19.546860  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.550701  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.554230  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0725 19:34:19.554305  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 19:34:19.594222  688921 cri.go:89] found id: "1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e"
	I0725 19:34:19.594256  688921 cri.go:89] found id: "6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985"
	I0725 19:34:19.594261  688921 cri.go:89] found id: ""
	I0725 19:34:19.594269  688921 logs.go:276] 2 containers: [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e 6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985]
	I0725 19:34:19.594337  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.598412  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.602080  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 19:34:19.602161  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 19:34:19.643928  688921 cri.go:89] found id: "22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc"
	I0725 19:34:19.643955  688921 cri.go:89] found id: ""
	I0725 19:34:19.643963  688921 logs.go:276] 1 containers: [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc]
	I0725 19:34:19.644027  688921 ssh_runner.go:195] Run: which crictl
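(Editor's note: the cri.go lines above discover container IDs for each component by running `crictl ps -a --quiet --name=<name>` on the node. The sketch below mirrors that discovery step; it is illustrative only, assumes crictl is on PATH with access to the containerd socket, and is not minikube's implementation.)

// crilist.go - small sketch of the container-discovery step shown above:
// run `crictl ps -a --quiet --name=<name>` per component and collect the IDs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs crictl reports for a given container name filter.
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed for %s: %w", name, err)
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	// The same component names the log above iterates over.
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner", "kubernetes-dashboard"}
	for _, name := range components {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), name, ids)
	}
}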
	I0725 19:34:19.647751  688921 logs.go:123] Gathering logs for container status ...
	I0725 19:34:19.647778  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 19:34:19.690394  688921 logs.go:123] Gathering logs for etcd [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967] ...
	I0725 19:34:19.690510  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967"
	I0725 19:34:19.732117  688921 logs.go:123] Gathering logs for kube-proxy [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98] ...
	I0725 19:34:19.732146  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98"
	I0725 19:34:19.780472  688921 logs.go:123] Gathering logs for kube-controller-manager [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5] ...
	I0725 19:34:19.780499  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5"
	I0725 19:34:19.851807  688921 logs.go:123] Gathering logs for kindnet [687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8] ...
	I0725 19:34:19.851890  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:34:19.909403  688921 logs.go:123] Gathering logs for storage-provisioner [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e] ...
	I0725 19:34:19.909441  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e"
	I0725 19:34:19.947085  688921 logs.go:123] Gathering logs for kubernetes-dashboard [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc] ...
	I0725 19:34:19.947112  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc"
	I0725 19:34:19.989416  688921 logs.go:123] Gathering logs for kindnet [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01] ...
	I0725 19:34:19.989441  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01"
	I0725 19:34:20.080244  688921 logs.go:123] Gathering logs for containerd ...
	I0725 19:34:20.080284  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0725 19:34:20.148363  688921 logs.go:123] Gathering logs for kubelet ...
	I0725 19:34:20.148399  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0725 19:34:20.209971  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:42 old-k8s-version-262689 kubelet[657]: E0725 19:28:42.036852     657 reflector.go:138] object-"kube-system"/"kindnet-token-vxqld": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vxqld" is forbidden: User "system:node:old-k8s-version-262689" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-262689' and this object
	W0725 19:34:20.210213  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:42 old-k8s-version-262689 kubelet[657]: E0725 19:28:42.037214     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-2gxrg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2gxrg" is forbidden: User "system:node:old-k8s-version-262689" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-262689' and this object
	W0725 19:34:20.213793  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:45 old-k8s-version-262689 kubelet[657]: E0725 19:28:45.149554     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.215174  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:46 old-k8s-version-262689 kubelet[657]: E0725 19:28:46.003537     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.218024  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:57 old-k8s-version-262689 kubelet[657]: E0725 19:28:57.257602     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.220231  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:09 old-k8s-version-262689 kubelet[657]: E0725 19:29:09.369957     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.220930  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:10 old-k8s-version-262689 kubelet[657]: E0725 19:29:10.383955     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.221254  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:11 old-k8s-version-262689 kubelet[657]: E0725 19:29:11.215410     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.222117  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:16 old-k8s-version-262689 kubelet[657]: E0725 19:29:16.271158     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.222720  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:17 old-k8s-version-262689 kubelet[657]: E0725 19:29:17.406387     657 pod_workers.go:191] Error syncing pod 192a4c32-53cd-4ce7-ba80-a523469b645d ("storage-provisioner_kube-system(192a4c32-53cd-4ce7-ba80-a523469b645d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(192a4c32-53cd-4ce7-ba80-a523469b645d)"
	W0725 19:34:20.226773  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:26 old-k8s-version-262689 kubelet[657]: E0725 19:29:26.224127     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.227465  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:29 old-k8s-version-262689 kubelet[657]: E0725 19:29:29.440381     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.227957  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:36 old-k8s-version-262689 kubelet[657]: E0725 19:29:36.271666     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.228162  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:37 old-k8s-version-262689 kubelet[657]: E0725 19:29:37.215365     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.228549  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:51 old-k8s-version-262689 kubelet[657]: E0725 19:29:51.215710     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.229042  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:51 old-k8s-version-262689 kubelet[657]: E0725 19:29:51.524343     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.229493  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:56 old-k8s-version-262689 kubelet[657]: E0725 19:29:56.271047     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.229718  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:04 old-k8s-version-262689 kubelet[657]: E0725 19:30:04.215943     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.230164  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:11 old-k8s-version-262689 kubelet[657]: E0725 19:30:11.215709     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.232845  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:19 old-k8s-version-262689 kubelet[657]: E0725 19:30:19.231937     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.233247  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:25 old-k8s-version-262689 kubelet[657]: E0725 19:30:25.214643     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.233529  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:32 old-k8s-version-262689 kubelet[657]: E0725 19:30:32.216336     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.234139  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:40 old-k8s-version-262689 kubelet[657]: E0725 19:30:40.656202     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.234363  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:46 old-k8s-version-262689 kubelet[657]: E0725 19:30:46.215221     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.234728  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:46 old-k8s-version-262689 kubelet[657]: E0725 19:30:46.271828     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.235078  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:01 old-k8s-version-262689 kubelet[657]: E0725 19:31:01.215458     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.235295  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:01 old-k8s-version-262689 kubelet[657]: E0725 19:31:01.215611     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.235532  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:13 old-k8s-version-262689 kubelet[657]: E0725 19:31:13.215172     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.235906  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:14 old-k8s-version-262689 kubelet[657]: E0725 19:31:14.214851     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.236251  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:26 old-k8s-version-262689 kubelet[657]: E0725 19:31:26.215358     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.236454  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:27 old-k8s-version-262689 kubelet[657]: E0725 19:31:27.215041     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.236868  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:37 old-k8s-version-262689 kubelet[657]: E0725 19:31:37.215248     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.240631  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:40 old-k8s-version-262689 kubelet[657]: E0725 19:31:40.223791     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.241195  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:52 old-k8s-version-262689 kubelet[657]: E0725 19:31:52.217037     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.241446  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:54 old-k8s-version-262689 kubelet[657]: E0725 19:31:54.225403     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.241652  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:05 old-k8s-version-262689 kubelet[657]: E0725 19:32:05.215333     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.242309  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:07 old-k8s-version-262689 kubelet[657]: E0725 19:32:07.888105     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.242659  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:16 old-k8s-version-262689 kubelet[657]: E0725 19:32:16.271463     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.242899  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:19 old-k8s-version-262689 kubelet[657]: E0725 19:32:19.215018     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.243284  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:28 old-k8s-version-262689 kubelet[657]: E0725 19:32:28.214662     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.243635  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:31 old-k8s-version-262689 kubelet[657]: E0725 19:32:31.215144     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.244269  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:41 old-k8s-version-262689 kubelet[657]: E0725 19:32:41.214882     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.244545  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:46 old-k8s-version-262689 kubelet[657]: E0725 19:32:46.215198     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.244932  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:52 old-k8s-version-262689 kubelet[657]: E0725 19:32:52.215208     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.245138  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:59 old-k8s-version-262689 kubelet[657]: E0725 19:32:59.215146     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.245498  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:06 old-k8s-version-262689 kubelet[657]: E0725 19:33:06.214773     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.245703  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:12 old-k8s-version-262689 kubelet[657]: E0725 19:33:12.219911     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.246101  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:17 old-k8s-version-262689 kubelet[657]: E0725 19:33:17.215219     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.246327  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:25 old-k8s-version-262689 kubelet[657]: E0725 19:33:25.215021     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.246766  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:32 old-k8s-version-262689 kubelet[657]: E0725 19:33:32.216211     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.247012  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:38 old-k8s-version-262689 kubelet[657]: E0725 19:33:38.215054     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.247382  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: E0725 19:33:45.216141     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.247585  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.247936  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.248172  688921 logs.go:138] Found kubelet problem: Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.248568  688921 logs.go:138] Found kubelet problem: Jul 25 19:34:11 old-k8s-version-262689 kubelet[657]: E0725 19:34:11.214739     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.248773  688921 logs.go:138] Found kubelet problem: Jul 25 19:34:14 old-k8s-version-262689 kubelet[657]: E0725 19:34:14.215104     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0725 19:34:20.248804  688921 logs.go:123] Gathering logs for describe nodes ...
	I0725 19:34:20.248838  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 19:34:20.394413  688921 logs.go:123] Gathering logs for kube-apiserver [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7] ...
	I0725 19:34:20.394445  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7"
	I0725 19:34:20.454291  688921 logs.go:123] Gathering logs for kube-apiserver [a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8] ...
	I0725 19:34:20.454325  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:34:20.507516  688921 logs.go:123] Gathering logs for coredns [6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8] ...
	I0725 19:34:20.507595  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:34:20.549222  688921 logs.go:123] Gathering logs for kube-scheduler [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1] ...
	I0725 19:34:20.549250  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1"
	I0725 19:34:20.600314  688921 logs.go:123] Gathering logs for etcd [bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6] ...
	I0725 19:34:20.600352  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:34:20.657965  688921 logs.go:123] Gathering logs for kube-proxy [5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f] ...
	I0725 19:34:20.657998  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:34:20.702882  688921 logs.go:123] Gathering logs for kube-controller-manager [092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb] ...
	I0725 19:34:20.702913  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:34:20.766370  688921 logs.go:123] Gathering logs for dmesg ...
	I0725 19:34:20.766446  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 19:34:20.794437  688921 logs.go:123] Gathering logs for coredns [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05] ...
	I0725 19:34:20.794537  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05"
	I0725 19:34:20.838887  688921 logs.go:123] Gathering logs for kube-scheduler [2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34] ...
	I0725 19:34:20.839007  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:34:20.898415  688921 logs.go:123] Gathering logs for storage-provisioner [6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985] ...
	I0725 19:34:20.898447  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985"
	I0725 19:34:20.947486  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:34:20.947512  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0725 19:34:20.947573  688921 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0725 19:34:20.947582  688921 out.go:239]   Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.947592  688921 out.go:239]   Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	  Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.947606  688921 out.go:239]   Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.947612  688921 out.go:239]   Jul 25 19:34:11 old-k8s-version-262689 kubelet[657]: E0725 19:34:11.214739     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	  Jul 25 19:34:11 old-k8s-version-262689 kubelet[657]: E0725 19:34:11.214739     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.947623  688921 out.go:239]   Jul 25 19:34:14 old-k8s-version-262689 kubelet[657]: E0725 19:34:14.215104     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Jul 25 19:34:14 old-k8s-version-262689 kubelet[657]: E0725 19:34:14.215104     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0725 19:34:20.947630  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:34:20.947635  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:34:30.948693  688921 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0725 19:34:30.959121  688921 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0725 19:34:30.961497  688921 out.go:177] 
	W0725 19:34:30.963511  688921 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0725 19:34:30.963612  688921 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0725 19:34:30.963654  688921 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0725 19:34:30.963661  688921 out.go:239] * 
	* 
	W0725 19:34:30.964606  688921 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 19:34:30.966662  688921 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-262689 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
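The exit status 102 here matches the K8S_UNHEALTHY_CONTROL_PLANE reason in the stderr log above: the API server answered /healthz with 200, but the control plane never reported the requested v1.20.0. The log's own suggestion is a full profile purge before retrying. A minimal sketch of that manual retry, assuming the same binary and profile name used by this test (KVM-specific flags from the original invocation omitted); this is not part of the recorded test run:

    # Cleanup suggested in the stderr log above
    out/minikube-linux-arm64 delete --all --purge
    # Re-run the start invocation quoted in start_stop_delete_test.go:259
    out/minikube-linux-arm64 start -p old-k8s-version-262689 --memory=2200 \
      --alsologtostderr --wait=true --driver=docker \
      --container-runtime=containerd --kubernetes-version=v1.20.0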
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-262689
helpers_test.go:235: (dbg) docker inspect old-k8s-version-262689:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ad3e5b171b2995bead1aa162fead29fa218841c43c7ac6e27fd72de330c3b7b7",
	        "Created": "2024-07-25T19:25:36.611162491Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 689127,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-25T19:28:17.34778438Z",
	            "FinishedAt": "2024-07-25T19:28:16.254703516Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/ad3e5b171b2995bead1aa162fead29fa218841c43c7ac6e27fd72de330c3b7b7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ad3e5b171b2995bead1aa162fead29fa218841c43c7ac6e27fd72de330c3b7b7/hostname",
	        "HostsPath": "/var/lib/docker/containers/ad3e5b171b2995bead1aa162fead29fa218841c43c7ac6e27fd72de330c3b7b7/hosts",
	        "LogPath": "/var/lib/docker/containers/ad3e5b171b2995bead1aa162fead29fa218841c43c7ac6e27fd72de330c3b7b7/ad3e5b171b2995bead1aa162fead29fa218841c43c7ac6e27fd72de330c3b7b7-json.log",
	        "Name": "/old-k8s-version-262689",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-262689:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-262689",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/418c0882e2e9198d8e5e6de7f15076ddcd713d0702378867f7f17ee3d3e4bd14-init/diff:/var/lib/docker/overlay2/2f35ea3391cd80b943121d1a194672f5d1b43fa71caefe855446e579999be65e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/418c0882e2e9198d8e5e6de7f15076ddcd713d0702378867f7f17ee3d3e4bd14/merged",
	                "UpperDir": "/var/lib/docker/overlay2/418c0882e2e9198d8e5e6de7f15076ddcd713d0702378867f7f17ee3d3e4bd14/diff",
	                "WorkDir": "/var/lib/docker/overlay2/418c0882e2e9198d8e5e6de7f15076ddcd713d0702378867f7f17ee3d3e4bd14/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-262689",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-262689/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-262689",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-262689",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-262689",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d7e1c6dc23f4ac0332976b6b8cb4b01481c4a77bc34a0164b3903a33b560f775",
	            "SandboxKey": "/var/run/docker/netns/d7e1c6dc23f4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-262689": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "54e9c52dec53efbe8fef6070e5abd7d8dc7bb9d79eeee2dd4b697ef53bd44da4",
	                    "EndpointID": "eee26b7868c125af8799690bb326cd4a1231d171ed3292e9e222cff3ed6121c9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-262689",
	                        "ad3e5b171b29"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
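The HostConfig block above reflects the limits requested by the start flags: Memory 2306867200 bytes (2200 MiB) and NanoCpus 2000000000 (2 CPUs). When only those fields are of interest, a Go-template query against the same container is a quicker check; a sketch, assuming the container name from this report and that the container is still present:

    docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}} {{.State.Status}}' old-k8s-version-262689
    # expected for this run: 2306867200 2000000000 running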
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-262689 -n old-k8s-version-262689
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-262689 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-262689 logs -n 25: (2.253920501s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-212266 sudo                                  | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	|         | systemctl status containerd                            |                        |         |         |                     |                     |
	|         | --all --full --no-pager                                |                        |         |         |                     |                     |
	| ssh     | -p bridge-212266 sudo                                  | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	|         | systemctl cat containerd                               |                        |         |         |                     |                     |
	|         | --no-pager                                             |                        |         |         |                     |                     |
	| ssh     | -p bridge-212266 sudo cat                              | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	|         | /lib/systemd/system/containerd.service                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-212266 sudo cat                              | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	|         | /etc/containerd/config.toml                            |                        |         |         |                     |                     |
	| ssh     | -p bridge-212266 sudo                                  | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	|         | containerd config dump                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-212266 sudo                                  | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC |                     |
	|         | systemctl status crio --all                            |                        |         |         |                     |                     |
	|         | --full --no-pager                                      |                        |         |         |                     |                     |
	| ssh     | -p bridge-212266 sudo                                  | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	|         | systemctl cat crio --no-pager                          |                        |         |         |                     |                     |
	| ssh     | -p bridge-212266 sudo find                             | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                        |         |         |                     |                     |
	| ssh     | -p bridge-212266 sudo crio                             | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	|         | config                                                 |                        |         |         |                     |                     |
	| delete  | -p bridge-212266                                       | bridge-212266          | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:26 UTC |
	| start   | -p no-preload-143817 --memory=2200                     | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:26 UTC | 25 Jul 24 19:28 UTC |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=docker                        |                        |         |         |                     |                     |
	|         |  --container-runtime=containerd                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-262689        | old-k8s-version-262689 | jenkins | v1.33.1 | 25 Jul 24 19:28 UTC | 25 Jul 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-262689                              | old-k8s-version-262689 | jenkins | v1.33.1 | 25 Jul 24 19:28 UTC | 25 Jul 24 19:28 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-262689             | old-k8s-version-262689 | jenkins | v1.33.1 | 25 Jul 24 19:28 UTC | 25 Jul 24 19:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-262689                              | old-k8s-version-262689 | jenkins | v1.33.1 | 25 Jul 24 19:28 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-143817             | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:28 UTC | 25 Jul 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-143817                                   | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:28 UTC | 25 Jul 24 19:28 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-143817                  | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:28 UTC | 25 Jul 24 19:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-143817 --memory=2200                     | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:28 UTC | 25 Jul 24 19:33 UTC |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --preload=false --driver=docker                        |                        |         |         |                     |                     |
	|         |  --container-runtime=containerd                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                        |         |         |                     |                     |
	| image   | no-preload-143817 image list                           | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:33 UTC | 25 Jul 24 19:33 UTC |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-143817                                   | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:33 UTC | 25 Jul 24 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-143817                                   | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:33 UTC | 25 Jul 24 19:33 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-143817                                   | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:33 UTC | 25 Jul 24 19:33 UTC |
	| delete  | -p no-preload-143817                                   | no-preload-143817      | jenkins | v1.33.1 | 25 Jul 24 19:33 UTC | 25 Jul 24 19:33 UTC |
	| start   | -p embed-certs-240166                                  | embed-certs-240166     | jenkins | v1.33.1 | 25 Jul 24 19:33 UTC | 25 Jul 24 19:34 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                        |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
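	For local reproduction, the one start in the table above that never recorded a completion time (the old-k8s-version-262689 restart) can be re-run with the same flag set. This is a sketch assembled from that table row, using the MINIKUBE_BIN path recorded later in the log:
	
	  out/minikube-linux-arm64 start -p old-k8s-version-262689 --memory=2200 \
	    --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=docker --container-runtime=containerd \
	    --kubernetes-version=v1.20.0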
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 19:33:21
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 19:33:21.286087  698181 out.go:291] Setting OutFile to fd 1 ...
	I0725 19:33:21.286271  698181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:33:21.286285  698181 out.go:304] Setting ErrFile to fd 2...
	I0725 19:33:21.286292  698181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:33:21.286571  698181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 19:33:21.287054  698181 out.go:298] Setting JSON to false
	I0725 19:33:21.288186  698181 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11750,"bootTime":1721924251,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 19:33:21.288258  698181 start.go:139] virtualization:  
	I0725 19:33:21.290661  698181 out.go:177] * [embed-certs-240166] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0725 19:33:21.292892  698181 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 19:33:21.293014  698181 notify.go:220] Checking for updates...
	I0725 19:33:21.296003  698181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 19:33:21.297804  698181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 19:33:21.299665  698181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 19:33:21.301320  698181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0725 19:33:21.303031  698181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 19:33:21.305404  698181 config.go:182] Loaded profile config "old-k8s-version-262689": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0725 19:33:21.305965  698181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 19:33:21.326604  698181 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 19:33:21.326835  698181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 19:33:21.392667  698181 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-25 19:33:21.378968035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 19:33:21.392783  698181 docker.go:307] overlay module found
	I0725 19:33:21.394788  698181 out.go:177] * Using the docker driver based on user configuration
	I0725 19:33:21.396340  698181 start.go:297] selected driver: docker
	I0725 19:33:21.396357  698181 start.go:901] validating driver "docker" against <nil>
	I0725 19:33:21.396372  698181 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 19:33:21.397068  698181 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 19:33:21.449340  698181 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-25 19:33:21.439211779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 19:33:21.449520  698181 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 19:33:21.449756  698181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:33:21.451573  698181 out.go:177] * Using Docker driver with root privileges
	I0725 19:33:21.453299  698181 cni.go:84] Creating CNI manager for ""
	I0725 19:33:21.453318  698181 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 19:33:21.453330  698181 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 19:33:21.453440  698181 start.go:340] cluster config:
	{Name:embed-certs-240166 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-240166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:33:21.455461  698181 out.go:177] * Starting "embed-certs-240166" primary control-plane node in "embed-certs-240166" cluster
	I0725 19:33:21.457046  698181 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0725 19:33:21.458776  698181 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0725 19:33:21.460583  698181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0725 19:33:21.460645  698181 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0725 19:33:21.460666  698181 cache.go:56] Caching tarball of preloaded images
	I0725 19:33:21.460678  698181 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0725 19:33:21.460746  698181 preload.go:172] Found /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0725 19:33:21.460756  698181 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
	I0725 19:33:21.460855  698181 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/config.json ...
	I0725 19:33:21.460905  698181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/config.json: {Name:mka6ae37130518f5717245e655531644dd928580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0725 19:33:21.480030  698181 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0725 19:33:21.480052  698181 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0725 19:33:21.480139  698181 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0725 19:33:21.480163  698181 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0725 19:33:21.480173  698181 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0725 19:33:21.480181  698181 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0725 19:33:21.480190  698181 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0725 19:33:21.603025  698181 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0725 19:33:21.603068  698181 cache.go:194] Successfully downloaded all kic artifacts
	I0725 19:33:21.603104  698181 start.go:360] acquireMachinesLock for embed-certs-240166: {Name:mk12ce5baa5d6e39306cbb300f74fcabccebd6da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0725 19:33:21.604201  698181 start.go:364] duration metric: took 1.070474ms to acquireMachinesLock for "embed-certs-240166"
	I0725 19:33:21.604247  698181 start.go:93] Provisioning new machine with config: &{Name:embed-certs-240166 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-240166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0725 19:33:21.604341  698181 start.go:125] createHost starting for "" (driver="docker")
	I0725 19:33:17.237479  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:19.735300  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:21.736670  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:21.606517  698181 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0725 19:33:21.606772  698181 start.go:159] libmachine.API.Create for "embed-certs-240166" (driver="docker")
	I0725 19:33:21.606824  698181 client.go:168] LocalClient.Create starting
	I0725 19:33:21.606890  698181 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem
	I0725 19:33:21.606927  698181 main.go:141] libmachine: Decoding PEM data...
	I0725 19:33:21.606996  698181 main.go:141] libmachine: Parsing certificate...
	I0725 19:33:21.607068  698181 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem
	I0725 19:33:21.607095  698181 main.go:141] libmachine: Decoding PEM data...
	I0725 19:33:21.607113  698181 main.go:141] libmachine: Parsing certificate...
	I0725 19:33:21.607496  698181 cli_runner.go:164] Run: docker network inspect embed-certs-240166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0725 19:33:21.623331  698181 cli_runner.go:211] docker network inspect embed-certs-240166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0725 19:33:21.623413  698181 network_create.go:284] running [docker network inspect embed-certs-240166] to gather additional debugging logs...
	I0725 19:33:21.623431  698181 cli_runner.go:164] Run: docker network inspect embed-certs-240166
	W0725 19:33:21.638402  698181 cli_runner.go:211] docker network inspect embed-certs-240166 returned with exit code 1
	I0725 19:33:21.638438  698181 network_create.go:287] error running [docker network inspect embed-certs-240166]: docker network inspect embed-certs-240166: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-240166 not found
	I0725 19:33:21.638453  698181 network_create.go:289] output of [docker network inspect embed-certs-240166]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-240166 not found
	
	** /stderr **
	I0725 19:33:21.638569  698181 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 19:33:21.661771  698181 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a821fe35c4f7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:35:60:6d:c3} reservation:<nil>}
	I0725 19:33:21.662251  698181 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-f7c6a289108c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:5a:53:02:1b} reservation:<nil>}
	I0725 19:33:21.662628  698181 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8b48f7ba5d2a IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:4c:9c:01:cd} reservation:<nil>}
	I0725 19:33:21.663216  698181 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001842fd0}
	I0725 19:33:21.663246  698181 network_create.go:124] attempt to create docker network embed-certs-240166 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0725 19:33:21.663305  698181 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-240166 embed-certs-240166
	I0725 19:33:21.757746  698181 network_create.go:108] docker network embed-certs-240166 192.168.76.0/24 created
	I0725 19:33:21.757783  698181 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-240166" container
	I0725 19:33:21.757858  698181 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0725 19:33:21.787127  698181 cli_runner.go:164] Run: docker volume create embed-certs-240166 --label name.minikube.sigs.k8s.io=embed-certs-240166 --label created_by.minikube.sigs.k8s.io=true
	I0725 19:33:21.803931  698181 oci.go:103] Successfully created a docker volume embed-certs-240166
	I0725 19:33:21.804026  698181 cli_runner.go:164] Run: docker run --rm --name embed-certs-240166-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-240166 --entrypoint /usr/bin/test -v embed-certs-240166:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0725 19:33:22.468647  698181 oci.go:107] Successfully prepared a docker volume embed-certs-240166
	I0725 19:33:22.468701  698181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0725 19:33:22.468722  698181 kic.go:194] Starting extracting preloaded images to volume ...
	I0725 19:33:22.468804  698181 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-240166:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0725 19:33:24.269025  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:26.736978  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:27.306852  698181 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-240166:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.837993165s)
	I0725 19:33:27.306884  698181 kic.go:203] duration metric: took 4.838157797s to extract preloaded images to volume ...
	W0725 19:33:27.307078  698181 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0725 19:33:27.307199  698181 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0725 19:33:27.360658  698181 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-240166 --name embed-certs-240166 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-240166 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-240166 --network embed-certs-240166 --ip 192.168.76.2 --volume embed-certs-240166:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0725 19:33:27.689655  698181 cli_runner.go:164] Run: docker container inspect embed-certs-240166 --format={{.State.Running}}
	I0725 19:33:27.718081  698181 cli_runner.go:164] Run: docker container inspect embed-certs-240166 --format={{.State.Status}}
	I0725 19:33:27.745269  698181 cli_runner.go:164] Run: docker exec embed-certs-240166 stat /var/lib/dpkg/alternatives/iptables
	I0725 19:33:27.804169  698181 oci.go:144] the created container "embed-certs-240166" has a running status.
	I0725 19:33:27.804195  698181 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa...
	I0725 19:33:28.366584  698181 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0725 19:33:28.398525  698181 cli_runner.go:164] Run: docker container inspect embed-certs-240166 --format={{.State.Status}}
	I0725 19:33:28.428736  698181 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0725 19:33:28.428768  698181 kic_runner.go:114] Args: [docker exec --privileged embed-certs-240166 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0725 19:33:28.502372  698181 cli_runner.go:164] Run: docker container inspect embed-certs-240166 --format={{.State.Status}}
	I0725 19:33:28.539007  698181 machine.go:94] provisionDockerMachine start ...
	I0725 19:33:28.539107  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:33:28.564018  698181 main.go:141] libmachine: Using SSH client type: native
	I0725 19:33:28.564290  698181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0725 19:33:28.564348  698181 main.go:141] libmachine: About to run SSH command:
	hostname
	I0725 19:33:28.740176  698181 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-240166
	
	I0725 19:33:28.740206  698181 ubuntu.go:169] provisioning hostname "embed-certs-240166"
	I0725 19:33:28.740572  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:33:28.780462  698181 main.go:141] libmachine: Using SSH client type: native
	I0725 19:33:28.780712  698181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0725 19:33:28.780731  698181 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-240166 && echo "embed-certs-240166" | sudo tee /etc/hostname
	I0725 19:33:28.949018  698181 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-240166
	
	I0725 19:33:28.949159  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:33:28.968513  698181 main.go:141] libmachine: Using SSH client type: native
	I0725 19:33:28.968767  698181 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 33506 <nil> <nil>}
	I0725 19:33:28.968789  698181 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-240166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-240166/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-240166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0725 19:33:29.110865  698181 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0725 19:33:29.110894  698181 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19326-431487/.minikube CaCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19326-431487/.minikube}
	I0725 19:33:29.111028  698181 ubuntu.go:177] setting up certificates
	I0725 19:33:29.111043  698181 provision.go:84] configureAuth start
	I0725 19:33:29.111142  698181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-240166
	I0725 19:33:29.143811  698181 provision.go:143] copyHostCerts
	I0725 19:33:29.143877  698181 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-431487/.minikube/ca.pem, removing ...
	I0725 19:33:29.143891  698181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-431487/.minikube/ca.pem
	I0725 19:33:29.143964  698181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/ca.pem (1082 bytes)
	I0725 19:33:29.144060  698181 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-431487/.minikube/cert.pem, removing ...
	I0725 19:33:29.144070  698181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-431487/.minikube/cert.pem
	I0725 19:33:29.144095  698181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/cert.pem (1123 bytes)
	I0725 19:33:29.144150  698181 exec_runner.go:144] found /home/jenkins/minikube-integration/19326-431487/.minikube/key.pem, removing ...
	I0725 19:33:29.144159  698181 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19326-431487/.minikube/key.pem
	I0725 19:33:29.144183  698181 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19326-431487/.minikube/key.pem (1679 bytes)
	I0725 19:33:29.144245  698181 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem org=jenkins.embed-certs-240166 san=[127.0.0.1 192.168.76.2 embed-certs-240166 localhost minikube]
	I0725 19:33:29.506793  698181 provision.go:177] copyRemoteCerts
	I0725 19:33:29.506888  698181 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0725 19:33:29.506952  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:33:29.523875  698181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa Username:docker}
	I0725 19:33:29.622986  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0725 19:33:29.651091  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0725 19:33:29.677554  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0725 19:33:29.702991  698181 provision.go:87] duration metric: took 591.93031ms to configureAuth
	I0725 19:33:29.703016  698181 ubuntu.go:193] setting minikube options for container-runtime
	I0725 19:33:29.703195  698181 config.go:182] Loaded profile config "embed-certs-240166": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 19:33:29.703248  698181 machine.go:97] duration metric: took 1.164200278s to provisionDockerMachine
	I0725 19:33:29.703261  698181 client.go:171] duration metric: took 8.096428246s to LocalClient.Create
	I0725 19:33:29.703276  698181 start.go:167] duration metric: took 8.096508835s to libmachine.API.Create "embed-certs-240166"
	I0725 19:33:29.703283  698181 start.go:293] postStartSetup for "embed-certs-240166" (driver="docker")
	I0725 19:33:29.703292  698181 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0725 19:33:29.703343  698181 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0725 19:33:29.703382  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:33:29.720635  698181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa Username:docker}
	I0725 19:33:29.817601  698181 ssh_runner.go:195] Run: cat /etc/os-release
	I0725 19:33:29.821712  698181 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0725 19:33:29.821748  698181 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0725 19:33:29.821759  698181 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0725 19:33:29.821766  698181 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0725 19:33:29.821776  698181 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-431487/.minikube/addons for local assets ...
	I0725 19:33:29.821838  698181 filesync.go:126] Scanning /home/jenkins/minikube-integration/19326-431487/.minikube/files for local assets ...
	I0725 19:33:29.821927  698181 filesync.go:149] local asset: /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/ssl/certs/4368932.pem -> 4368932.pem in /etc/ssl/certs
	I0725 19:33:29.822035  698181 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0725 19:33:29.831050  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/ssl/certs/4368932.pem --> /etc/ssl/certs/4368932.pem (1708 bytes)
	I0725 19:33:29.856799  698181 start.go:296] duration metric: took 153.501464ms for postStartSetup
	I0725 19:33:29.857188  698181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-240166
	I0725 19:33:29.874370  698181 profile.go:143] Saving config to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/config.json ...
	I0725 19:33:29.874833  698181 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 19:33:29.874888  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:33:29.891612  698181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa Username:docker}
	I0725 19:33:29.987959  698181 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0725 19:33:29.993193  698181 start.go:128] duration metric: took 8.388834764s to createHost
	I0725 19:33:29.993215  698181 start.go:83] releasing machines lock for "embed-certs-240166", held for 8.388994629s
	I0725 19:33:29.993292  698181 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-240166
	I0725 19:33:30.041150  698181 ssh_runner.go:195] Run: cat /version.json
	I0725 19:33:30.041216  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:33:30.041498  698181 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0725 19:33:30.041566  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:33:30.066990  698181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa Username:docker}
	I0725 19:33:30.074094  698181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa Username:docker}
	I0725 19:33:30.171464  698181 ssh_runner.go:195] Run: systemctl --version
	I0725 19:33:30.310923  698181 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0725 19:33:30.315424  698181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0725 19:33:30.343063  698181 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0725 19:33:30.343257  698181 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0725 19:33:30.373709  698181 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0725 19:33:30.373735  698181 start.go:495] detecting cgroup driver to use...
	I0725 19:33:30.373768  698181 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0725 19:33:30.373827  698181 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0725 19:33:30.388124  698181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0725 19:33:30.400545  698181 docker.go:217] disabling cri-docker service (if available) ...
	I0725 19:33:30.400611  698181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0725 19:33:30.414450  698181 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0725 19:33:30.429800  698181 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0725 19:33:30.515245  698181 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0725 19:33:30.612730  698181 docker.go:233] disabling docker service ...
	I0725 19:33:30.612804  698181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0725 19:33:30.636014  698181 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0725 19:33:30.648280  698181 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0725 19:33:30.743686  698181 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0725 19:33:30.840320  698181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0725 19:33:30.853028  698181 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0725 19:33:30.870703  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0725 19:33:30.881325  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0725 19:33:30.892077  698181 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0725 19:33:30.892167  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0725 19:33:30.902689  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 19:33:30.912883  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0725 19:33:30.923509  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0725 19:33:30.933999  698181 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0725 19:33:30.943971  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0725 19:33:30.953964  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0725 19:33:30.964381  698181 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0725 19:33:30.976608  698181 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0725 19:33:30.988735  698181 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0725 19:33:30.998487  698181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:33:31.093478  698181 ssh_runner.go:195] Run: sudo systemctl restart containerd
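	To spot-check the containerd reconfiguration above after the restart, the edited values can be read back from inside the node; a sketch using the profile name and file path from this log (minikube ssh runs the quoted command on the node):
	
	  out/minikube-linux-arm64 -p embed-certs-240166 ssh "grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml"
	  out/minikube-linux-arm64 -p embed-certs-240166 ssh "sudo crictl version"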
	I0725 19:33:31.245256  698181 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0725 19:33:31.245359  698181 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0725 19:33:31.249151  698181 start.go:563] Will wait 60s for crictl version
	I0725 19:33:31.249245  698181 ssh_runner.go:195] Run: which crictl
	I0725 19:33:31.253138  698181 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0725 19:33:31.301013  698181 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0725 19:33:31.301145  698181 ssh_runner.go:195] Run: containerd --version
	I0725 19:33:31.325398  698181 ssh_runner.go:195] Run: containerd --version
	I0725 19:33:31.352075  698181 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
	I0725 19:33:29.235104  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:31.235396  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:31.353780  698181 cli_runner.go:164] Run: docker network inspect embed-certs-240166 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0725 19:33:31.370568  698181 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0725 19:33:31.374335  698181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:33:31.386395  698181 kubeadm.go:883] updating cluster {Name:embed-certs-240166 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-240166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0725 19:33:31.386522  698181 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0725 19:33:31.386587  698181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:33:31.429169  698181 containerd.go:627] all images are preloaded for containerd runtime.
	I0725 19:33:31.429194  698181 containerd.go:534] Images already preloaded, skipping extraction
	I0725 19:33:31.429258  698181 ssh_runner.go:195] Run: sudo crictl images --output json
	I0725 19:33:31.468583  698181 containerd.go:627] all images are preloaded for containerd runtime.
	I0725 19:33:31.468607  698181 cache_images.go:84] Images are preloaded, skipping loading
	I0725 19:33:31.468616  698181 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.30.3 containerd true true} ...
	I0725 19:33:31.468762  698181 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-240166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-240166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0725 19:33:31.468840  698181 ssh_runner.go:195] Run: sudo crictl info
	I0725 19:33:31.508103  698181 cni.go:84] Creating CNI manager for ""
	I0725 19:33:31.508130  698181 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 19:33:31.508141  698181 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0725 19:33:31.508200  698181 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-240166 NodeName:embed-certs-240166 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0725 19:33:31.508369  698181 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-240166"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0725 19:33:31.508444  698181 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0725 19:33:31.517801  698181 binaries.go:44] Found k8s binaries, skipping transfer
	I0725 19:33:31.517878  698181 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0725 19:33:31.527377  698181 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0725 19:33:31.547255  698181 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0725 19:33:31.568491  698181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0725 19:33:31.588345  698181 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0725 19:33:31.591995  698181 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0725 19:33:31.603812  698181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:33:31.689689  698181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:33:31.704202  698181 certs.go:68] Setting up /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166 for IP: 192.168.76.2
	I0725 19:33:31.704226  698181 certs.go:194] generating shared ca certs ...
	I0725 19:33:31.704244  698181 certs.go:226] acquiring lock for ca certs: {Name:mk41d7b1e7cb52699a093c81e00768f54d73ad8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:33:31.704417  698181 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19326-431487/.minikube/ca.key
	I0725 19:33:31.704485  698181 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.key
	I0725 19:33:31.704507  698181 certs.go:256] generating profile certs ...
	I0725 19:33:31.704581  698181 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/client.key
	I0725 19:33:31.704599  698181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/client.crt with IP's: []
	I0725 19:33:32.186631  698181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/client.crt ...
	I0725 19:33:32.186661  698181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/client.crt: {Name:mk9f8e7c4ff5a207e674f3df448020f54f39cd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:33:32.187468  698181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/client.key ...
	I0725 19:33:32.187492  698181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/client.key: {Name:mk3d9b077ab20a79d3672d18bcd04c1fd79de06d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:33:32.188238  698181 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.key.db275045
	I0725 19:33:32.188264  698181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.crt.db275045 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0725 19:33:33.162876  698181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.crt.db275045 ...
	I0725 19:33:33.162909  698181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.crt.db275045: {Name:mkca91b3261b2785acdfcb08b17303cf0cb4bd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:33:33.163685  698181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.key.db275045 ...
	I0725 19:33:33.163704  698181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.key.db275045: {Name:mk666de744508f02eb76a0e47b56c4ca8edd2188 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:33:33.163808  698181 certs.go:381] copying /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.crt.db275045 -> /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.crt
	I0725 19:33:33.163893  698181 certs.go:385] copying /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.key.db275045 -> /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.key
	I0725 19:33:33.163962  698181 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/proxy-client.key
	I0725 19:33:33.163981  698181 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/proxy-client.crt with IP's: []
	I0725 19:33:33.398078  698181 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/proxy-client.crt ...
	I0725 19:33:33.398112  698181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/proxy-client.crt: {Name:mk5bc0fcba79e4269e2201872e15f982db0851bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:33:33.398323  698181 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/proxy-client.key ...
	I0725 19:33:33.398341  698181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/proxy-client.key: {Name:mk61f4809552f61ef52620008e2c1e388c8c971a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:33:33.398536  698181 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/436893.pem (1338 bytes)
	W0725 19:33:33.398584  698181 certs.go:480] ignoring /home/jenkins/minikube-integration/19326-431487/.minikube/certs/436893_empty.pem, impossibly tiny 0 bytes
	I0725 19:33:33.398605  698181 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca-key.pem (1675 bytes)
	I0725 19:33:33.398633  698181 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/ca.pem (1082 bytes)
	I0725 19:33:33.398662  698181 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/cert.pem (1123 bytes)
	I0725 19:33:33.398690  698181 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/certs/key.pem (1679 bytes)
	I0725 19:33:33.398738  698181 certs.go:484] found cert: /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/ssl/certs/4368932.pem (1708 bytes)
	I0725 19:33:33.399446  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0725 19:33:33.424390  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0725 19:33:33.449128  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0725 19:33:33.477115  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0725 19:33:33.503421  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0725 19:33:33.530144  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0725 19:33:33.556098  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0725 19:33:33.583146  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/embed-certs-240166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0725 19:33:33.610478  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/ssl/certs/4368932.pem --> /usr/share/ca-certificates/4368932.pem (1708 bytes)
	I0725 19:33:33.637066  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0725 19:33:33.662610  698181 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19326-431487/.minikube/certs/436893.pem --> /usr/share/ca-certificates/436893.pem (1338 bytes)
	I0725 19:33:33.689664  698181 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0725 19:33:33.709484  698181 ssh_runner.go:195] Run: openssl version
	I0725 19:33:33.715426  698181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4368932.pem && ln -fs /usr/share/ca-certificates/4368932.pem /etc/ssl/certs/4368932.pem"
	I0725 19:33:33.725379  698181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4368932.pem
	I0725 19:33:33.729205  698181 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 25 18:40 /usr/share/ca-certificates/4368932.pem
	I0725 19:33:33.729311  698181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4368932.pem
	I0725 19:33:33.738249  698181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/4368932.pem /etc/ssl/certs/3ec20f2e.0"
	I0725 19:33:33.748340  698181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0725 19:33:33.758338  698181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:33:33.761876  698181 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 25 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:33:33.761971  698181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0725 19:33:33.769152  698181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0725 19:33:33.786534  698181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/436893.pem && ln -fs /usr/share/ca-certificates/436893.pem /etc/ssl/certs/436893.pem"
	I0725 19:33:33.798695  698181 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/436893.pem
	I0725 19:33:33.802587  698181 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 25 18:40 /usr/share/ca-certificates/436893.pem
	I0725 19:33:33.802701  698181 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/436893.pem
	I0725 19:33:33.810466  698181 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/436893.pem /etc/ssl/certs/51391683.0"
	I0725 19:33:33.824559  698181 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0725 19:33:33.828557  698181 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0725 19:33:33.828625  698181 kubeadm.go:392] StartCluster: {Name:embed-certs-240166 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-240166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 19:33:33.828713  698181 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0725 19:33:33.828774  698181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0725 19:33:33.870692  698181 cri.go:89] found id: ""
	I0725 19:33:33.870785  698181 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0725 19:33:33.880440  698181 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0725 19:33:33.890210  698181 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0725 19:33:33.890274  698181 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0725 19:33:33.900158  698181 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0725 19:33:33.900178  698181 kubeadm.go:157] found existing configuration files:
	
	I0725 19:33:33.900234  698181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0725 19:33:33.910104  698181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0725 19:33:33.910233  698181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0725 19:33:33.919375  698181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0725 19:33:33.928955  698181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0725 19:33:33.929052  698181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0725 19:33:33.937818  698181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0725 19:33:33.947153  698181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0725 19:33:33.947268  698181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0725 19:33:33.956017  698181 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0725 19:33:33.965557  698181 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0725 19:33:33.965644  698181 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0725 19:33:33.974526  698181 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0725 19:33:34.028995  698181 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0725 19:33:34.029254  698181 kubeadm.go:310] [preflight] Running pre-flight checks
	I0725 19:33:34.082611  698181 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0725 19:33:34.082762  698181 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
	I0725 19:33:34.082827  698181 kubeadm.go:310] OS: Linux
	I0725 19:33:34.082897  698181 kubeadm.go:310] CGROUPS_CPU: enabled
	I0725 19:33:34.083025  698181 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0725 19:33:34.083108  698181 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0725 19:33:34.083187  698181 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0725 19:33:34.083265  698181 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0725 19:33:34.083353  698181 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0725 19:33:34.083425  698181 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0725 19:33:34.083505  698181 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0725 19:33:34.083582  698181 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0725 19:33:34.203513  698181 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0725 19:33:34.203667  698181 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0725 19:33:34.203790  698181 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0725 19:33:34.563358  698181 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0725 19:33:34.566714  698181 out.go:204]   - Generating certificates and keys ...
	I0725 19:33:34.566905  698181 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0725 19:33:34.567068  698181 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0725 19:33:35.047348  698181 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0725 19:33:35.222717  698181 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0725 19:33:35.533480  698181 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0725 19:33:35.855215  698181 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0725 19:33:36.276151  698181 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0725 19:33:36.276426  698181 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-240166 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0725 19:33:33.236475  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:35.236629  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:36.711778  698181 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0725 19:33:36.712146  698181 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-240166 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0725 19:33:38.412683  698181 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0725 19:33:38.598281  698181 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0725 19:33:39.122498  698181 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0725 19:33:39.122857  698181 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0725 19:33:39.649046  698181 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0725 19:33:39.974265  698181 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0725 19:33:40.786699  698181 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0725 19:33:37.237325  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:39.239015  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:41.737833  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:41.440370  698181 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0725 19:33:42.261264  698181 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0725 19:33:42.262035  698181 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0725 19:33:42.265797  698181 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0725 19:33:42.268366  698181 out.go:204]   - Booting up control plane ...
	I0725 19:33:42.268477  698181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0725 19:33:42.268561  698181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0725 19:33:42.270556  698181 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0725 19:33:42.286179  698181 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0725 19:33:42.287459  698181 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0725 19:33:42.287548  698181 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0725 19:33:42.396216  698181 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0725 19:33:42.396307  698181 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0725 19:33:44.396440  698181 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001397654s
	I0725 19:33:44.396528  698181 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0725 19:33:44.238271  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:46.737113  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:51.397991  698181 kubeadm.go:310] [api-check] The API server is healthy after 7.001867326s
	I0725 19:33:51.421760  698181 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0725 19:33:51.440506  698181 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0725 19:33:51.471059  698181 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0725 19:33:51.471259  698181 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-240166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0725 19:33:51.485311  698181 kubeadm.go:310] [bootstrap-token] Using token: rhjkyi.ypg72ptr5gntjngs
	I0725 19:33:49.236415  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:51.736605  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:51.488104  698181 out.go:204]   - Configuring RBAC rules ...
	I0725 19:33:51.488236  698181 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0725 19:33:51.493394  698181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0725 19:33:51.502061  698181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0725 19:33:51.506835  698181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0725 19:33:51.511184  698181 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0725 19:33:51.517454  698181 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0725 19:33:51.805736  698181 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0725 19:33:52.237957  698181 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0725 19:33:52.805032  698181 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0725 19:33:52.806244  698181 kubeadm.go:310] 
	I0725 19:33:52.806318  698181 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0725 19:33:52.806325  698181 kubeadm.go:310] 
	I0725 19:33:52.806399  698181 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0725 19:33:52.806403  698181 kubeadm.go:310] 
	I0725 19:33:52.806428  698181 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0725 19:33:52.806493  698181 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0725 19:33:52.806545  698181 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0725 19:33:52.806552  698181 kubeadm.go:310] 
	I0725 19:33:52.806613  698181 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0725 19:33:52.806618  698181 kubeadm.go:310] 
	I0725 19:33:52.806664  698181 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0725 19:33:52.806669  698181 kubeadm.go:310] 
	I0725 19:33:52.806719  698181 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0725 19:33:52.806790  698181 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0725 19:33:52.806856  698181 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0725 19:33:52.806860  698181 kubeadm.go:310] 
	I0725 19:33:52.806974  698181 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0725 19:33:52.807050  698181 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0725 19:33:52.807055  698181 kubeadm.go:310] 
	I0725 19:33:52.807135  698181 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rhjkyi.ypg72ptr5gntjngs \
	I0725 19:33:52.807234  698181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1d9d423c7b2bb24c3d3f38c79b211d8531f40310f79a51ed9602d13a57b81b8c \
	I0725 19:33:52.807254  698181 kubeadm.go:310] 	--control-plane 
	I0725 19:33:52.807259  698181 kubeadm.go:310] 
	I0725 19:33:52.807340  698181 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0725 19:33:52.807345  698181 kubeadm.go:310] 
	I0725 19:33:52.807423  698181 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rhjkyi.ypg72ptr5gntjngs \
	I0725 19:33:52.807522  698181 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1d9d423c7b2bb24c3d3f38c79b211d8531f40310f79a51ed9602d13a57b81b8c 
	I0725 19:33:52.811047  698181 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1065-aws\n", err: exit status 1
	I0725 19:33:52.811183  698181 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0725 19:33:52.811204  698181 cni.go:84] Creating CNI manager for ""
	I0725 19:33:52.811221  698181 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 19:33:52.814176  698181 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0725 19:33:52.816754  698181 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0725 19:33:52.821663  698181 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0725 19:33:52.821686  698181 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0725 19:33:52.841213  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0725 19:33:53.116641  698181 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0725 19:33:53.116796  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:53.116912  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-240166 minikube.k8s.io/updated_at=2024_07_25T19_33_53_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a minikube.k8s.io/name=embed-certs-240166 minikube.k8s.io/primary=true
	I0725 19:33:53.323222  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:53.323233  698181 ops.go:34] apiserver oom_adj: -16
	I0725 19:33:53.824048  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:54.323325  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:54.823565  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:55.323386  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:55.824196  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:54.236461  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:56.735944  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:33:56.323578  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:56.823306  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:57.323389  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:57.823867  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:58.323477  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:58.824357  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:59.324142  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:59.823401  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:00.325200  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:00.824053  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:33:59.235650  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:01.736227  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:01.324347  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:01.824176  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:02.324171  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:02.824048  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:03.324129  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:03.824184  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:04.323594  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:04.824151  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:05.324176  698181 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0725 19:34:05.434803  698181 kubeadm.go:1113] duration metric: took 12.318066159s to wait for elevateKubeSystemPrivileges
	I0725 19:34:05.434839  698181 kubeadm.go:394] duration metric: took 31.606218155s to StartCluster
	I0725 19:34:05.434857  698181 settings.go:142] acquiring lock: {Name:mk69edff96840eebb76289a50cf78daf601fe5de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:34:05.434923  698181 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 19:34:05.436318  698181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19326-431487/kubeconfig: {Name:mk3cdfe1101bbbc0f7441d92cff5cd6b29ee3404 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0725 19:34:05.436564  698181 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0725 19:34:05.436723  698181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0725 19:34:05.436984  698181 config.go:182] Loaded profile config "embed-certs-240166": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 19:34:05.437028  698181 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0725 19:34:05.437094  698181 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-240166"
	I0725 19:34:05.437116  698181 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-240166"
	I0725 19:34:05.437139  698181 host.go:66] Checking if "embed-certs-240166" exists ...
	I0725 19:34:05.437827  698181 cli_runner.go:164] Run: docker container inspect embed-certs-240166 --format={{.State.Status}}
	I0725 19:34:05.437896  698181 addons.go:69] Setting default-storageclass=true in profile "embed-certs-240166"
	I0725 19:34:05.437930  698181 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-240166"
	I0725 19:34:05.438154  698181 cli_runner.go:164] Run: docker container inspect embed-certs-240166 --format={{.State.Status}}
	I0725 19:34:05.440393  698181 out.go:177] * Verifying Kubernetes components...
	I0725 19:34:05.443237  698181 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0725 19:34:05.472212  698181 addons.go:234] Setting addon default-storageclass=true in "embed-certs-240166"
	I0725 19:34:05.472255  698181 host.go:66] Checking if "embed-certs-240166" exists ...
	I0725 19:34:05.472673  698181 cli_runner.go:164] Run: docker container inspect embed-certs-240166 --format={{.State.Status}}
	I0725 19:34:05.481196  698181 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0725 19:34:05.483953  698181 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:34:05.483972  698181 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0725 19:34:05.484040  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:34:05.516136  698181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa Username:docker}
	I0725 19:34:05.518612  698181 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0725 19:34:05.518634  698181 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0725 19:34:05.518705  698181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-240166
	I0725 19:34:05.540433  698181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33506 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/embed-certs-240166/id_rsa Username:docker}
	I0725 19:34:05.908291  698181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0725 19:34:05.931484  698181 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0725 19:34:05.947466  698181 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0725 19:34:05.947588  698181 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0725 19:34:03.736798  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:06.235979  688921 pod_ready.go:102] pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:06.737497  688921 pod_ready.go:81] duration metric: took 4m0.008287476s for pod "metrics-server-9975d5f86-8g9gp" in "kube-system" namespace to be "Ready" ...
	E0725 19:34:06.737526  688921 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0725 19:34:06.737535  688921 pod_ready.go:38] duration metric: took 5m24.79684369s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:34:06.737548  688921 api_server.go:52] waiting for apiserver process to appear ...
	I0725 19:34:06.737576  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0725 19:34:06.737650  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 19:34:06.956609  698181 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025090111s)
	I0725 19:34:06.956817  698181 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.00921085s)
	I0725 19:34:06.957045  698181 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.009550961s)
	I0725 19:34:06.957063  698181 start.go:971] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I0725 19:34:06.960334  698181 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0725 19:34:06.960601  698181 node_ready.go:35] waiting up to 6m0s for node "embed-certs-240166" to be "Ready" ...
	I0725 19:34:06.963201  698181 addons.go:510] duration metric: took 1.526164291s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0725 19:34:06.983393  698181 node_ready.go:49] node "embed-certs-240166" has status "Ready":"True"
	I0725 19:34:06.983419  698181 node_ready.go:38] duration metric: took 22.764089ms for node "embed-certs-240166" to be "Ready" ...
	I0725 19:34:06.983430  698181 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:34:07.004487  698181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:07.469766  698181 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-240166" context rescaled to 1 replicas
	I0725 19:34:09.012241  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:06.807692  688921 cri.go:89] found id: "8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7"
	I0725 19:34:06.807723  688921 cri.go:89] found id: "a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:34:06.807729  688921 cri.go:89] found id: ""
	I0725 19:34:06.807736  688921 logs.go:276] 2 containers: [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7 a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8]
	I0725 19:34:06.807794  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.818733  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.822775  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0725 19:34:06.822852  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 19:34:06.888510  688921 cri.go:89] found id: "1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967"
	I0725 19:34:06.888542  688921 cri.go:89] found id: "bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:34:06.888547  688921 cri.go:89] found id: ""
	I0725 19:34:06.888557  688921 logs.go:276] 2 containers: [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967 bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6]
	I0725 19:34:06.888612  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.892832  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.897974  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0725 19:34:06.898094  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 19:34:06.959812  688921 cri.go:89] found id: "aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05"
	I0725 19:34:06.959838  688921 cri.go:89] found id: "6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:34:06.959844  688921 cri.go:89] found id: ""
	I0725 19:34:06.959851  688921 logs.go:276] 2 containers: [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05 6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8]
	I0725 19:34:06.959956  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.964200  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:06.968411  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0725 19:34:06.968494  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 19:34:07.033397  688921 cri.go:89] found id: "fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1"
	I0725 19:34:07.033436  688921 cri.go:89] found id: "2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:34:07.033442  688921 cri.go:89] found id: ""
	I0725 19:34:07.033450  688921 logs.go:276] 2 containers: [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1 2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34]
	I0725 19:34:07.033516  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.043063  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.048413  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0725 19:34:07.048505  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 19:34:07.111832  688921 cri.go:89] found id: "09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98"
	I0725 19:34:07.111912  688921 cri.go:89] found id: "5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:34:07.111931  688921 cri.go:89] found id: ""
	I0725 19:34:07.111952  688921 logs.go:276] 2 containers: [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98 5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f]
	I0725 19:34:07.112033  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.117008  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.121337  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 19:34:07.121457  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 19:34:07.187319  688921 cri.go:89] found id: "83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5"
	I0725 19:34:07.187384  688921 cri.go:89] found id: "092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:34:07.187401  688921 cri.go:89] found id: ""
	I0725 19:34:07.187423  688921 logs.go:276] 2 containers: [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5 092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb]
	I0725 19:34:07.187507  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.192267  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.196794  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0725 19:34:07.196914  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 19:34:07.271398  688921 cri.go:89] found id: "ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01"
	I0725 19:34:07.271471  688921 cri.go:89] found id: "687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:34:07.271489  688921 cri.go:89] found id: ""
	I0725 19:34:07.271511  688921 logs.go:276] 2 containers: [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01 687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8]
	I0725 19:34:07.271595  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.275933  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.281359  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0725 19:34:07.281487  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 19:34:07.377349  688921 cri.go:89] found id: "1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e"
	I0725 19:34:07.377413  688921 cri.go:89] found id: "6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985"
	I0725 19:34:07.377432  688921 cri.go:89] found id: ""
	I0725 19:34:07.377454  688921 logs.go:276] 2 containers: [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e 6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985]
	I0725 19:34:07.377542  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.383264  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.388959  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 19:34:07.389087  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 19:34:07.461197  688921 cri.go:89] found id: "22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc"
	I0725 19:34:07.461216  688921 cri.go:89] found id: ""
	I0725 19:34:07.461223  688921 logs.go:276] 1 containers: [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc]
	I0725 19:34:07.461279  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:07.465404  688921 logs.go:123] Gathering logs for storage-provisioner [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e] ...
	I0725 19:34:07.465428  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e"
	I0725 19:34:07.547164  688921 logs.go:123] Gathering logs for container status ...
	I0725 19:34:07.547233  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 19:34:07.624426  688921 logs.go:123] Gathering logs for kubelet ...
	I0725 19:34:07.624507  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0725 19:34:07.703403  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:42 old-k8s-version-262689 kubelet[657]: E0725 19:28:42.036852     657 reflector.go:138] object-"kube-system"/"kindnet-token-vxqld": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vxqld" is forbidden: User "system:node:old-k8s-version-262689" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-262689' and this object
	W0725 19:34:07.703640  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:42 old-k8s-version-262689 kubelet[657]: E0725 19:28:42.037214     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-2gxrg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2gxrg" is forbidden: User "system:node:old-k8s-version-262689" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-262689' and this object
	W0725 19:34:07.707435  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:45 old-k8s-version-262689 kubelet[657]: E0725 19:28:45.149554     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.707634  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:46 old-k8s-version-262689 kubelet[657]: E0725 19:28:46.003537     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.710803  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:57 old-k8s-version-262689 kubelet[657]: E0725 19:28:57.257602     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.713278  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:09 old-k8s-version-262689 kubelet[657]: E0725 19:29:09.369957     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.713674  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:10 old-k8s-version-262689 kubelet[657]: E0725 19:29:10.383955     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.713885  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:11 old-k8s-version-262689 kubelet[657]: E0725 19:29:11.215410     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.714611  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:16 old-k8s-version-262689 kubelet[657]: E0725 19:29:16.271158     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.715106  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:17 old-k8s-version-262689 kubelet[657]: E0725 19:29:17.406387     657 pod_workers.go:191] Error syncing pod 192a4c32-53cd-4ce7-ba80-a523469b645d ("storage-provisioner_kube-system(192a4c32-53cd-4ce7-ba80-a523469b645d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(192a4c32-53cd-4ce7-ba80-a523469b645d)"
	W0725 19:34:07.718311  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:26 old-k8s-version-262689 kubelet[657]: E0725 19:29:26.224127     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.719087  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:29 old-k8s-version-262689 kubelet[657]: E0725 19:29:29.440381     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.719629  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:36 old-k8s-version-262689 kubelet[657]: E0725 19:29:36.271666     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.719827  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:37 old-k8s-version-262689 kubelet[657]: E0725 19:29:37.215365     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.720195  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:51 old-k8s-version-262689 kubelet[657]: E0725 19:29:51.215710     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.720693  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:51 old-k8s-version-262689 kubelet[657]: E0725 19:29:51.524343     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.721057  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:56 old-k8s-version-262689 kubelet[657]: E0725 19:29:56.271047     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.721244  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:04 old-k8s-version-262689 kubelet[657]: E0725 19:30:04.215943     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.721572  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:11 old-k8s-version-262689 kubelet[657]: E0725 19:30:11.215709     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.724232  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:19 old-k8s-version-262689 kubelet[657]: E0725 19:30:19.231937     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.724588  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:25 old-k8s-version-262689 kubelet[657]: E0725 19:30:25.214643     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.724789  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:32 old-k8s-version-262689 kubelet[657]: E0725 19:30:32.216336     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.725458  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:40 old-k8s-version-262689 kubelet[657]: E0725 19:30:40.656202     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.725659  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:46 old-k8s-version-262689 kubelet[657]: E0725 19:30:46.215221     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.726057  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:46 old-k8s-version-262689 kubelet[657]: E0725 19:30:46.271828     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.726400  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:01 old-k8s-version-262689 kubelet[657]: E0725 19:31:01.215458     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.726633  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:01 old-k8s-version-262689 kubelet[657]: E0725 19:31:01.215611     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.726841  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:13 old-k8s-version-262689 kubelet[657]: E0725 19:31:13.215172     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.727236  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:14 old-k8s-version-262689 kubelet[657]: E0725 19:31:14.214851     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.727622  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:26 old-k8s-version-262689 kubelet[657]: E0725 19:31:26.215358     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.727848  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:27 old-k8s-version-262689 kubelet[657]: E0725 19:31:27.215041     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.728224  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:37 old-k8s-version-262689 kubelet[657]: E0725 19:31:37.215248     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.731097  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:40 old-k8s-version-262689 kubelet[657]: E0725 19:31:40.223791     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:07.731472  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:52 old-k8s-version-262689 kubelet[657]: E0725 19:31:52.217037     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.731662  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:54 old-k8s-version-262689 kubelet[657]: E0725 19:31:54.225403     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.731924  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:05 old-k8s-version-262689 kubelet[657]: E0725 19:32:05.215333     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.732602  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:07 old-k8s-version-262689 kubelet[657]: E0725 19:32:07.888105     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.732942  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:16 old-k8s-version-262689 kubelet[657]: E0725 19:32:16.271463     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.733127  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:19 old-k8s-version-262689 kubelet[657]: E0725 19:32:19.215018     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.733595  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:28 old-k8s-version-262689 kubelet[657]: E0725 19:32:28.214662     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.733787  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:31 old-k8s-version-262689 kubelet[657]: E0725 19:32:31.215144     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.734174  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:41 old-k8s-version-262689 kubelet[657]: E0725 19:32:41.214882     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.734408  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:46 old-k8s-version-262689 kubelet[657]: E0725 19:32:46.215198     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.734844  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:52 old-k8s-version-262689 kubelet[657]: E0725 19:32:52.215208     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.735085  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:59 old-k8s-version-262689 kubelet[657]: E0725 19:32:59.215146     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.735513  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:06 old-k8s-version-262689 kubelet[657]: E0725 19:33:06.214773     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.735733  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:12 old-k8s-version-262689 kubelet[657]: E0725 19:33:12.219911     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.736070  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:17 old-k8s-version-262689 kubelet[657]: E0725 19:33:17.215219     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.736255  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:25 old-k8s-version-262689 kubelet[657]: E0725 19:33:25.215021     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.736581  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:32 old-k8s-version-262689 kubelet[657]: E0725 19:33:32.216211     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.736764  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:38 old-k8s-version-262689 kubelet[657]: E0725 19:33:38.215054     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.737090  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: E0725 19:33:45.216141     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.737377  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:07.737718  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:07.737917  688921 logs.go:138] Found kubelet problem: Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0725 19:34:07.737927  688921 logs.go:123] Gathering logs for etcd [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967] ...
	I0725 19:34:07.737943  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967"
	I0725 19:34:07.822664  688921 logs.go:123] Gathering logs for kube-scheduler [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1] ...
	I0725 19:34:07.822740  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1"
	I0725 19:34:07.879095  688921 logs.go:123] Gathering logs for kindnet [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01] ...
	I0725 19:34:07.879168  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01"
	I0725 19:34:07.955648  688921 logs.go:123] Gathering logs for kindnet [687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8] ...
	I0725 19:34:07.955730  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:34:08.037629  688921 logs.go:123] Gathering logs for kube-apiserver [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7] ...
	I0725 19:34:08.037704  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7"
	I0725 19:34:08.114974  688921 logs.go:123] Gathering logs for kube-scheduler [2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34] ...
	I0725 19:34:08.115040  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:34:08.198695  688921 logs.go:123] Gathering logs for kube-controller-manager [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5] ...
	I0725 19:34:08.198774  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5"
	I0725 19:34:08.332924  688921 logs.go:123] Gathering logs for kube-controller-manager [092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb] ...
	I0725 19:34:08.333002  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:34:08.461781  688921 logs.go:123] Gathering logs for kube-proxy [5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f] ...
	I0725 19:34:08.461879  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:34:08.540031  688921 logs.go:123] Gathering logs for storage-provisioner [6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985] ...
	I0725 19:34:08.540060  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985"
	I0725 19:34:08.583359  688921 logs.go:123] Gathering logs for containerd ...
	I0725 19:34:08.583437  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0725 19:34:08.659982  688921 logs.go:123] Gathering logs for dmesg ...
	I0725 19:34:08.660061  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 19:34:08.680183  688921 logs.go:123] Gathering logs for kube-apiserver [a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8] ...
	I0725 19:34:08.680216  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:34:08.758790  688921 logs.go:123] Gathering logs for coredns [6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8] ...
	I0725 19:34:08.758823  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:34:08.810366  688921 logs.go:123] Gathering logs for kube-proxy [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98] ...
	I0725 19:34:08.810401  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98"
	I0725 19:34:08.855821  688921 logs.go:123] Gathering logs for describe nodes ...
	I0725 19:34:08.855850  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 19:34:09.049486  688921 logs.go:123] Gathering logs for etcd [bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6] ...
	I0725 19:34:09.049516  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:34:09.101862  688921 logs.go:123] Gathering logs for coredns [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05] ...
	I0725 19:34:09.101895  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05"
	I0725 19:34:09.150649  688921 logs.go:123] Gathering logs for kubernetes-dashboard [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc] ...
	I0725 19:34:09.150683  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc"
	I0725 19:34:09.195050  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:34:09.195080  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0725 19:34:09.195179  688921 out.go:239] X Problems detected in kubelet:
	W0725 19:34:09.195327  688921 out.go:239]   Jul 25 19:33:38 old-k8s-version-262689 kubelet[657]: E0725 19:33:38.215054     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:09.195345  688921 out.go:239]   Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: E0725 19:33:45.216141     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:09.195424  688921 out.go:239]   Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:09.195440  688921 out.go:239]   Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:09.195465  688921 out.go:239]   Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0725 19:34:09.195473  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:34:09.195484  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:34:11.511518  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:13.511969  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:16.013214  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:18.014038  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:20.512482  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:19.196744  688921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 19:34:19.209410  688921 api_server.go:72] duration metric: took 5m55.042327442s to wait for apiserver process to appear ...
	I0725 19:34:19.209435  688921 api_server.go:88] waiting for apiserver healthz status ...
	I0725 19:34:19.209472  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0725 19:34:19.209531  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0725 19:34:19.247922  688921 cri.go:89] found id: "8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7"
	I0725 19:34:19.247942  688921 cri.go:89] found id: "a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:34:19.247947  688921 cri.go:89] found id: ""
	I0725 19:34:19.247954  688921 logs.go:276] 2 containers: [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7 a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8]
	I0725 19:34:19.248012  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.252109  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.255778  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0725 19:34:19.255850  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0725 19:34:19.295831  688921 cri.go:89] found id: "1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967"
	I0725 19:34:19.295853  688921 cri.go:89] found id: "bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:34:19.295859  688921 cri.go:89] found id: ""
	I0725 19:34:19.295866  688921 logs.go:276] 2 containers: [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967 bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6]
	I0725 19:34:19.295924  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.300077  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.303811  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0725 19:34:19.303883  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0725 19:34:19.349146  688921 cri.go:89] found id: "aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05"
	I0725 19:34:19.349170  688921 cri.go:89] found id: "6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:34:19.349176  688921 cri.go:89] found id: ""
	I0725 19:34:19.349183  688921 logs.go:276] 2 containers: [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05 6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8]
	I0725 19:34:19.349247  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.353162  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.356919  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0725 19:34:19.357009  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0725 19:34:19.396994  688921 cri.go:89] found id: "fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1"
	I0725 19:34:19.397014  688921 cri.go:89] found id: "2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:34:19.397018  688921 cri.go:89] found id: ""
	I0725 19:34:19.397025  688921 logs.go:276] 2 containers: [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1 2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34]
	I0725 19:34:19.397084  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.401204  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.405026  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0725 19:34:19.405158  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0725 19:34:19.446347  688921 cri.go:89] found id: "09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98"
	I0725 19:34:19.446370  688921 cri.go:89] found id: "5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:34:19.446375  688921 cri.go:89] found id: ""
	I0725 19:34:19.446383  688921 logs.go:276] 2 containers: [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98 5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f]
	I0725 19:34:19.446443  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.450086  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.453624  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0725 19:34:19.453731  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0725 19:34:19.492956  688921 cri.go:89] found id: "83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5"
	I0725 19:34:19.492981  688921 cri.go:89] found id: "092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:34:19.492985  688921 cri.go:89] found id: ""
	I0725 19:34:19.492993  688921 logs.go:276] 2 containers: [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5 092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb]
	I0725 19:34:19.493051  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.497081  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.500953  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0725 19:34:19.501052  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0725 19:34:19.546766  688921 cri.go:89] found id: "ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01"
	I0725 19:34:19.546791  688921 cri.go:89] found id: "687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:34:19.546797  688921 cri.go:89] found id: ""
	I0725 19:34:19.546804  688921 logs.go:276] 2 containers: [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01 687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8]
	I0725 19:34:19.546860  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.550701  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.554230  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0725 19:34:19.554305  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0725 19:34:19.594222  688921 cri.go:89] found id: "1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e"
	I0725 19:34:19.594256  688921 cri.go:89] found id: "6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985"
	I0725 19:34:19.594261  688921 cri.go:89] found id: ""
	I0725 19:34:19.594269  688921 logs.go:276] 2 containers: [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e 6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985]
	I0725 19:34:19.594337  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.598412  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.602080  688921 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0725 19:34:19.602161  688921 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0725 19:34:19.643928  688921 cri.go:89] found id: "22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc"
	I0725 19:34:19.643955  688921 cri.go:89] found id: ""
	I0725 19:34:19.643963  688921 logs.go:276] 1 containers: [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc]
	I0725 19:34:19.644027  688921 ssh_runner.go:195] Run: which crictl
	I0725 19:34:19.647751  688921 logs.go:123] Gathering logs for container status ...
	I0725 19:34:19.647778  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0725 19:34:19.690394  688921 logs.go:123] Gathering logs for etcd [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967] ...
	I0725 19:34:19.690510  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967"
	I0725 19:34:19.732117  688921 logs.go:123] Gathering logs for kube-proxy [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98] ...
	I0725 19:34:19.732146  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98"
	I0725 19:34:19.780472  688921 logs.go:123] Gathering logs for kube-controller-manager [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5] ...
	I0725 19:34:19.780499  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5"
	I0725 19:34:19.851807  688921 logs.go:123] Gathering logs for kindnet [687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8] ...
	I0725 19:34:19.851890  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8"
	I0725 19:34:19.909403  688921 logs.go:123] Gathering logs for storage-provisioner [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e] ...
	I0725 19:34:19.909441  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e"
	I0725 19:34:19.947085  688921 logs.go:123] Gathering logs for kubernetes-dashboard [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc] ...
	I0725 19:34:19.947112  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc"
	I0725 19:34:19.989416  688921 logs.go:123] Gathering logs for kindnet [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01] ...
	I0725 19:34:19.989441  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01"
	I0725 19:34:20.080244  688921 logs.go:123] Gathering logs for containerd ...
	I0725 19:34:20.080284  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0725 19:34:20.148363  688921 logs.go:123] Gathering logs for kubelet ...
	I0725 19:34:20.148399  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0725 19:34:20.209971  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:42 old-k8s-version-262689 kubelet[657]: E0725 19:28:42.036852     657 reflector.go:138] object-"kube-system"/"kindnet-token-vxqld": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vxqld" is forbidden: User "system:node:old-k8s-version-262689" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-262689' and this object
	W0725 19:34:20.210213  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:42 old-k8s-version-262689 kubelet[657]: E0725 19:28:42.037214     657 reflector.go:138] object-"kube-system"/"kube-proxy-token-2gxrg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-2gxrg" is forbidden: User "system:node:old-k8s-version-262689" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-262689' and this object
	W0725 19:34:20.213793  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:45 old-k8s-version-262689 kubelet[657]: E0725 19:28:45.149554     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.215174  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:46 old-k8s-version-262689 kubelet[657]: E0725 19:28:46.003537     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.218024  688921 logs.go:138] Found kubelet problem: Jul 25 19:28:57 old-k8s-version-262689 kubelet[657]: E0725 19:28:57.257602     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.220231  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:09 old-k8s-version-262689 kubelet[657]: E0725 19:29:09.369957     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.220930  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:10 old-k8s-version-262689 kubelet[657]: E0725 19:29:10.383955     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.221254  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:11 old-k8s-version-262689 kubelet[657]: E0725 19:29:11.215410     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.222117  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:16 old-k8s-version-262689 kubelet[657]: E0725 19:29:16.271158     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.222720  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:17 old-k8s-version-262689 kubelet[657]: E0725 19:29:17.406387     657 pod_workers.go:191] Error syncing pod 192a4c32-53cd-4ce7-ba80-a523469b645d ("storage-provisioner_kube-system(192a4c32-53cd-4ce7-ba80-a523469b645d)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(192a4c32-53cd-4ce7-ba80-a523469b645d)"
	W0725 19:34:20.226773  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:26 old-k8s-version-262689 kubelet[657]: E0725 19:29:26.224127     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.227465  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:29 old-k8s-version-262689 kubelet[657]: E0725 19:29:29.440381     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.227957  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:36 old-k8s-version-262689 kubelet[657]: E0725 19:29:36.271666     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.228162  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:37 old-k8s-version-262689 kubelet[657]: E0725 19:29:37.215365     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.228549  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:51 old-k8s-version-262689 kubelet[657]: E0725 19:29:51.215710     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.229042  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:51 old-k8s-version-262689 kubelet[657]: E0725 19:29:51.524343     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.229493  688921 logs.go:138] Found kubelet problem: Jul 25 19:29:56 old-k8s-version-262689 kubelet[657]: E0725 19:29:56.271047     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.229718  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:04 old-k8s-version-262689 kubelet[657]: E0725 19:30:04.215943     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.230164  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:11 old-k8s-version-262689 kubelet[657]: E0725 19:30:11.215709     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.232845  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:19 old-k8s-version-262689 kubelet[657]: E0725 19:30:19.231937     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.233247  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:25 old-k8s-version-262689 kubelet[657]: E0725 19:30:25.214643     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.233529  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:32 old-k8s-version-262689 kubelet[657]: E0725 19:30:32.216336     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.234139  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:40 old-k8s-version-262689 kubelet[657]: E0725 19:30:40.656202     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.234363  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:46 old-k8s-version-262689 kubelet[657]: E0725 19:30:46.215221     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.234728  688921 logs.go:138] Found kubelet problem: Jul 25 19:30:46 old-k8s-version-262689 kubelet[657]: E0725 19:30:46.271828     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.235078  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:01 old-k8s-version-262689 kubelet[657]: E0725 19:31:01.215458     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.235295  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:01 old-k8s-version-262689 kubelet[657]: E0725 19:31:01.215611     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.235532  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:13 old-k8s-version-262689 kubelet[657]: E0725 19:31:13.215172     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.235906  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:14 old-k8s-version-262689 kubelet[657]: E0725 19:31:14.214851     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.236251  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:26 old-k8s-version-262689 kubelet[657]: E0725 19:31:26.215358     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.236454  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:27 old-k8s-version-262689 kubelet[657]: E0725 19:31:27.215041     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.236868  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:37 old-k8s-version-262689 kubelet[657]: E0725 19:31:37.215248     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.240631  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:40 old-k8s-version-262689 kubelet[657]: E0725 19:31:40.223791     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W0725 19:34:20.241195  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:52 old-k8s-version-262689 kubelet[657]: E0725 19:31:52.217037     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.241446  688921 logs.go:138] Found kubelet problem: Jul 25 19:31:54 old-k8s-version-262689 kubelet[657]: E0725 19:31:54.225403     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.241652  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:05 old-k8s-version-262689 kubelet[657]: E0725 19:32:05.215333     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.242309  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:07 old-k8s-version-262689 kubelet[657]: E0725 19:32:07.888105     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.242659  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:16 old-k8s-version-262689 kubelet[657]: E0725 19:32:16.271463     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.242899  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:19 old-k8s-version-262689 kubelet[657]: E0725 19:32:19.215018     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.243284  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:28 old-k8s-version-262689 kubelet[657]: E0725 19:32:28.214662     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.243635  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:31 old-k8s-version-262689 kubelet[657]: E0725 19:32:31.215144     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.244269  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:41 old-k8s-version-262689 kubelet[657]: E0725 19:32:41.214882     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.244545  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:46 old-k8s-version-262689 kubelet[657]: E0725 19:32:46.215198     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.244932  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:52 old-k8s-version-262689 kubelet[657]: E0725 19:32:52.215208     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.245138  688921 logs.go:138] Found kubelet problem: Jul 25 19:32:59 old-k8s-version-262689 kubelet[657]: E0725 19:32:59.215146     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.245498  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:06 old-k8s-version-262689 kubelet[657]: E0725 19:33:06.214773     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.245703  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:12 old-k8s-version-262689 kubelet[657]: E0725 19:33:12.219911     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.246101  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:17 old-k8s-version-262689 kubelet[657]: E0725 19:33:17.215219     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.246327  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:25 old-k8s-version-262689 kubelet[657]: E0725 19:33:25.215021     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.246766  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:32 old-k8s-version-262689 kubelet[657]: E0725 19:33:32.216211     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.247012  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:38 old-k8s-version-262689 kubelet[657]: E0725 19:33:38.215054     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.247382  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: E0725 19:33:45.216141     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.247585  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.247936  688921 logs.go:138] Found kubelet problem: Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.248172  688921 logs.go:138] Found kubelet problem: Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.248568  688921 logs.go:138] Found kubelet problem: Jul 25 19:34:11 old-k8s-version-262689 kubelet[657]: E0725 19:34:11.214739     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.248773  688921 logs.go:138] Found kubelet problem: Jul 25 19:34:14 old-k8s-version-262689 kubelet[657]: E0725 19:34:14.215104     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
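	The two problems that recur throughout this window are the metrics-server image pull from the unresolvable host fake.domain and the dashboard-metrics-scraper CrashLoopBackOff. A minimal sketch for inspecting them by hand against this profile (pod names are taken from the log above; the commands are ordinary kubectl calls, not part of the automated test, and assume the kubectl context carries the profile name as minikube normally configures) might be:
	  kubectl --context old-k8s-version-262689 -n kube-system describe pod metrics-server-9975d5f86-8g9gp
	  kubectl --context old-k8s-version-262689 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-49jgz --previous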
	I0725 19:34:20.248804  688921 logs.go:123] Gathering logs for describe nodes ...
	I0725 19:34:20.248838  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0725 19:34:20.394413  688921 logs.go:123] Gathering logs for kube-apiserver [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7] ...
	I0725 19:34:20.394445  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7"
	I0725 19:34:20.454291  688921 logs.go:123] Gathering logs for kube-apiserver [a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8] ...
	I0725 19:34:20.454325  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8"
	I0725 19:34:20.507516  688921 logs.go:123] Gathering logs for coredns [6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8] ...
	I0725 19:34:20.507595  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8"
	I0725 19:34:20.549222  688921 logs.go:123] Gathering logs for kube-scheduler [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1] ...
	I0725 19:34:20.549250  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1"
	I0725 19:34:20.600314  688921 logs.go:123] Gathering logs for etcd [bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6] ...
	I0725 19:34:20.600352  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6"
	I0725 19:34:20.657965  688921 logs.go:123] Gathering logs for kube-proxy [5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f] ...
	I0725 19:34:20.657998  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f"
	I0725 19:34:20.702882  688921 logs.go:123] Gathering logs for kube-controller-manager [092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb] ...
	I0725 19:34:20.702913  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb"
	I0725 19:34:20.766370  688921 logs.go:123] Gathering logs for dmesg ...
	I0725 19:34:20.766446  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0725 19:34:20.794437  688921 logs.go:123] Gathering logs for coredns [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05] ...
	I0725 19:34:20.794537  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05"
	I0725 19:34:20.838887  688921 logs.go:123] Gathering logs for kube-scheduler [2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34] ...
	I0725 19:34:20.839007  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34"
	I0725 19:34:20.898415  688921 logs.go:123] Gathering logs for storage-provisioner [6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985] ...
	I0725 19:34:20.898447  688921 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985"
	I0725 19:34:20.947486  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:34:20.947512  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0725 19:34:20.947573  688921 out.go:239] X Problems detected in kubelet:
	W0725 19:34:20.947582  688921 out.go:239]   Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.947592  688921 out.go:239]   Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.947606  688921 out.go:239]   Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0725 19:34:20.947612  688921 out.go:239]   Jul 25 19:34:11 old-k8s-version-262689 kubelet[657]: E0725 19:34:11.214739     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	W0725 19:34:20.947623  688921 out.go:239]   Jul 25 19:34:14 old-k8s-version-262689 kubelet[657]: E0725 19:34:14.215104     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0725 19:34:20.947630  688921 out.go:304] Setting ErrFile to fd 2...
	I0725 19:34:20.947635  688921 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:34:23.013238  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:25.016359  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:30.948693  688921 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0725 19:34:30.959121  688921 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0725 19:34:30.961497  688921 out.go:177] 
	W0725 19:34:30.963511  688921 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0725 19:34:30.963612  688921 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0725 19:34:30.963654  688921 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0725 19:34:30.963661  688921 out.go:239] * 
	W0725 19:34:30.964606  688921 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0725 19:34:30.966662  688921 out.go:177] 
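	The run above exits with K8S_UNHEALTHY_CONTROL_PLANE after the 6m0s wait. As a rough sketch of the recovery path the tool itself suggests (both commands are quoted from the output above; note that --all --purge removes every local profile and cached state, so it is destructive):
	  minikube delete --all --purge
	  minikube logs --file=logs.txt   # attach logs.txt to the linked GitHub issue if the failure persists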
	I0725 19:34:27.510924  698181 pod_ready.go:102] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"False"
	I0725 19:34:29.511238  698181 pod_ready.go:92] pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace has status "Ready":"True"
	I0725 19:34:29.511265  698181 pod_ready.go:81] duration metric: took 22.506691607s for pod "coredns-7db6d8ff4d-2dw2z" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.511278  698181 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-7cfk5" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.515203  698181 pod_ready.go:97] error getting pod "coredns-7db6d8ff4d-7cfk5" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-7cfk5" not found
	I0725 19:34:29.515231  698181 pod_ready.go:81] duration metric: took 3.94614ms for pod "coredns-7db6d8ff4d-7cfk5" in "kube-system" namespace to be "Ready" ...
	E0725 19:34:29.515243  698181 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-7db6d8ff4d-7cfk5" in "kube-system" namespace (skipping!): pods "coredns-7db6d8ff4d-7cfk5" not found
	I0725 19:34:29.515250  698181 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-240166" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.523254  698181 pod_ready.go:92] pod "etcd-embed-certs-240166" in "kube-system" namespace has status "Ready":"True"
	I0725 19:34:29.523283  698181 pod_ready.go:81] duration metric: took 8.024904ms for pod "etcd-embed-certs-240166" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.523298  698181 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-240166" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.530149  698181 pod_ready.go:92] pod "kube-apiserver-embed-certs-240166" in "kube-system" namespace has status "Ready":"True"
	I0725 19:34:29.530174  698181 pod_ready.go:81] duration metric: took 6.867689ms for pod "kube-apiserver-embed-certs-240166" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.530186  698181 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-240166" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.536286  698181 pod_ready.go:92] pod "kube-controller-manager-embed-certs-240166" in "kube-system" namespace has status "Ready":"True"
	I0725 19:34:29.536321  698181 pod_ready.go:81] duration metric: took 6.11564ms for pod "kube-controller-manager-embed-certs-240166" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.536334  698181 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xhbp5" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.708404  698181 pod_ready.go:92] pod "kube-proxy-xhbp5" in "kube-system" namespace has status "Ready":"True"
	I0725 19:34:29.708429  698181 pod_ready.go:81] duration metric: took 172.087683ms for pod "kube-proxy-xhbp5" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:29.708441  698181 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-240166" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:30.111665  698181 pod_ready.go:92] pod "kube-scheduler-embed-certs-240166" in "kube-system" namespace has status "Ready":"True"
	I0725 19:34:30.111701  698181 pod_ready.go:81] duration metric: took 403.251052ms for pod "kube-scheduler-embed-certs-240166" in "kube-system" namespace to be "Ready" ...
	I0725 19:34:30.111717  698181 pod_ready.go:38] duration metric: took 23.128275631s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0725 19:34:30.111733  698181 api_server.go:52] waiting for apiserver process to appear ...
	I0725 19:34:30.111824  698181 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 19:34:30.128717  698181 api_server.go:72] duration metric: took 24.692109699s to wait for apiserver process to appear ...
	I0725 19:34:30.128746  698181 api_server.go:88] waiting for apiserver healthz status ...
	I0725 19:34:30.128769  698181 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0725 19:34:30.137513  698181 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0725 19:34:30.139011  698181 api_server.go:141] control plane version: v1.30.3
	I0725 19:34:30.139047  698181 api_server.go:131] duration metric: took 10.293101ms to wait for apiserver health ...
	I0725 19:34:30.139058  698181 system_pods.go:43] waiting for kube-system pods to appear ...
	I0725 19:34:30.312461  698181 system_pods.go:59] 8 kube-system pods found
	I0725 19:34:30.312498  698181 system_pods.go:61] "coredns-7db6d8ff4d-2dw2z" [4e91d1a1-f51d-4bc9-a312-ab4e94121022] Running
	I0725 19:34:30.312504  698181 system_pods.go:61] "etcd-embed-certs-240166" [4c90e835-3da3-4c08-8e1b-fa3c9f8960a3] Running
	I0725 19:34:30.312513  698181 system_pods.go:61] "kindnet-7lfwb" [20820be7-3917-4f5c-a617-8f62a5ce1a98] Running
	I0725 19:34:30.312517  698181 system_pods.go:61] "kube-apiserver-embed-certs-240166" [189c8487-f49e-40aa-a380-bd16a45294a4] Running
	I0725 19:34:30.312522  698181 system_pods.go:61] "kube-controller-manager-embed-certs-240166" [d15a5860-6d3b-41db-8952-42d727fafae2] Running
	I0725 19:34:30.312526  698181 system_pods.go:61] "kube-proxy-xhbp5" [e21b820d-585f-4afc-9c1e-25b50d72693c] Running
	I0725 19:34:30.312568  698181 system_pods.go:61] "kube-scheduler-embed-certs-240166" [9c92f69c-f335-4f1b-b312-e867ddd98272] Running
	I0725 19:34:30.312582  698181 system_pods.go:61] "storage-provisioner" [0b38db65-0f67-4545-8b9c-3272fb58f327] Running
	I0725 19:34:30.312587  698181 system_pods.go:74] duration metric: took 173.52404ms to wait for pod list to return data ...
	I0725 19:34:30.312595  698181 default_sa.go:34] waiting for default service account to be created ...
	I0725 19:34:30.508093  698181 default_sa.go:45] found service account: "default"
	I0725 19:34:30.508122  698181 default_sa.go:55] duration metric: took 195.509931ms for default service account to be created ...
	I0725 19:34:30.508133  698181 system_pods.go:116] waiting for k8s-apps to be running ...
	I0725 19:34:30.712727  698181 system_pods.go:86] 8 kube-system pods found
	I0725 19:34:30.712759  698181 system_pods.go:89] "coredns-7db6d8ff4d-2dw2z" [4e91d1a1-f51d-4bc9-a312-ab4e94121022] Running
	I0725 19:34:30.712766  698181 system_pods.go:89] "etcd-embed-certs-240166" [4c90e835-3da3-4c08-8e1b-fa3c9f8960a3] Running
	I0725 19:34:30.712770  698181 system_pods.go:89] "kindnet-7lfwb" [20820be7-3917-4f5c-a617-8f62a5ce1a98] Running
	I0725 19:34:30.712775  698181 system_pods.go:89] "kube-apiserver-embed-certs-240166" [189c8487-f49e-40aa-a380-bd16a45294a4] Running
	I0725 19:34:30.712780  698181 system_pods.go:89] "kube-controller-manager-embed-certs-240166" [d15a5860-6d3b-41db-8952-42d727fafae2] Running
	I0725 19:34:30.712821  698181 system_pods.go:89] "kube-proxy-xhbp5" [e21b820d-585f-4afc-9c1e-25b50d72693c] Running
	I0725 19:34:30.712831  698181 system_pods.go:89] "kube-scheduler-embed-certs-240166" [9c92f69c-f335-4f1b-b312-e867ddd98272] Running
	I0725 19:34:30.712835  698181 system_pods.go:89] "storage-provisioner" [0b38db65-0f67-4545-8b9c-3272fb58f327] Running
	I0725 19:34:30.712842  698181 system_pods.go:126] duration metric: took 204.703547ms to wait for k8s-apps to be running ...
	I0725 19:34:30.712849  698181 system_svc.go:44] waiting for kubelet service to be running ....
	I0725 19:34:30.712930  698181 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 19:34:30.725052  698181 system_svc.go:56] duration metric: took 12.19142ms WaitForService to wait for kubelet
	I0725 19:34:30.725123  698181 kubeadm.go:582] duration metric: took 25.288521634s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0725 19:34:30.725149  698181 node_conditions.go:102] verifying NodePressure condition ...
	I0725 19:34:30.908389  698181 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0725 19:34:30.908423  698181 node_conditions.go:123] node cpu capacity is 2
	I0725 19:34:30.908441  698181 node_conditions.go:105] duration metric: took 183.284854ms to run NodePressure ...
	I0725 19:34:30.908453  698181 start.go:241] waiting for startup goroutines ...
	I0725 19:34:30.908461  698181 start.go:246] waiting for cluster config update ...
	I0725 19:34:30.908485  698181 start.go:255] writing updated cluster config ...
	I0725 19:34:30.908790  698181 ssh_runner.go:195] Run: rm -f paused
	I0725 19:34:31.011227  698181 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0725 19:34:31.018101  698181 out.go:177] * Done! kubectl is now configured to use "embed-certs-240166" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	8bd318dd74db9       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   068b7e93eeb68       dashboard-metrics-scraper-8d5bb5db8-49jgz
	1be6aaee701fc       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   96d7f668e0fa5       storage-provisioner
	22dd933593790       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   79b31dd96a47a       kubernetes-dashboard-cd95d586-nkqkw
	09fa596a52d12       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   e059235025f64       kube-proxy-srbcv
	b62507cdbdc5e       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   e56250afac817       busybox
	aa97a928188d1       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   035172f4b1f12       coredns-74ff55c5b-djgf4
	6ce89219b0ac0       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   96d7f668e0fa5       storage-provisioner
	ff3fdfb5b2d79       f42786f8afd22       5 minutes ago       Running             kindnet-cni                 1                   c0187f0a2471d       kindnet-dmcbc
	fac92b7ae7a4b       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   c2165b2a09c79       kube-scheduler-old-k8s-version-262689
	1cd92cb68a1c5       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   aa9b38650c1c0       etcd-old-k8s-version-262689
	83992657e3b87       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   abdada99bd82c       kube-controller-manager-old-k8s-version-262689
	8282e5e19263c       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   17bdf93d666de       kube-apiserver-old-k8s-version-262689
	6fe11bcfac9b6       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   494de20f4296e       busybox
	6bec8a2f6c401       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   89f9059ba5055       coredns-74ff55c5b-djgf4
	687cf4fb5ec2d       f42786f8afd22       7 minutes ago       Exited              kindnet-cni                 0                   1153ee1398057       kindnet-dmcbc
	5a8e2f746885e       25a5233254979       7 minutes ago       Exited              kube-proxy                  0                   ee0dfef3f2c09       kube-proxy-srbcv
	2a00adbe91bc4       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   2e9005a787b56       kube-scheduler-old-k8s-version-262689
	092cb7f61fd7d       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   cef425aa8e167       kube-controller-manager-old-k8s-version-262689
	a78a4cf991c24       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   45261c58db55a       kube-apiserver-old-k8s-version-262689
	bbbb23c4486cc       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   2e19fa398b988       etcd-old-k8s-version-262689
	
	
	==> containerd <==
	Jul 25 19:30:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:30:40.257524898Z" level=info msg="CreateContainer within sandbox \"068b7e93eeb68f96cac03b6f43d20598972a26327defc6a948bea085e8f1dea2\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"5f99d2a14b7c6e38618baa7187b8c9656dd0546f570fc09eada072d40c5a66b7\""
	Jul 25 19:30:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:30:40.259593042Z" level=info msg="StartContainer for \"5f99d2a14b7c6e38618baa7187b8c9656dd0546f570fc09eada072d40c5a66b7\""
	Jul 25 19:30:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:30:40.323289796Z" level=info msg="StartContainer for \"5f99d2a14b7c6e38618baa7187b8c9656dd0546f570fc09eada072d40c5a66b7\" returns successfully"
	Jul 25 19:30:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:30:40.351197732Z" level=info msg="shim disconnected" id=5f99d2a14b7c6e38618baa7187b8c9656dd0546f570fc09eada072d40c5a66b7 namespace=k8s.io
	Jul 25 19:30:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:30:40.351269993Z" level=warning msg="cleaning up after shim disconnected" id=5f99d2a14b7c6e38618baa7187b8c9656dd0546f570fc09eada072d40c5a66b7 namespace=k8s.io
	Jul 25 19:30:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:30:40.351281472Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 25 19:30:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:30:40.655096777Z" level=info msg="RemoveContainer for \"80bacb8fee6efe2ae8e2229b095cb84de4c3eb7d0d7c4d07e5488c9a7cd7a8de\""
	Jul 25 19:30:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:30:40.667884037Z" level=info msg="RemoveContainer for \"80bacb8fee6efe2ae8e2229b095cb84de4c3eb7d0d7c4d07e5488c9a7cd7a8de\" returns successfully"
	Jul 25 19:31:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:31:40.215454593Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:31:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:31:40.221597464Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jul 25 19:31:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:31:40.223275784Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Jul 25 19:31:40 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:31:40.223368532Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.217404172Z" level=info msg="CreateContainer within sandbox \"068b7e93eeb68f96cac03b6f43d20598972a26327defc6a948bea085e8f1dea2\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.230756343Z" level=info msg="CreateContainer within sandbox \"068b7e93eeb68f96cac03b6f43d20598972a26327defc6a948bea085e8f1dea2\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b\""
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.231504813Z" level=info msg="StartContainer for \"8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b\""
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.294192393Z" level=info msg="StartContainer for \"8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b\" returns successfully"
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.322524109Z" level=info msg="shim disconnected" id=8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b namespace=k8s.io
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.322716211Z" level=warning msg="cleaning up after shim disconnected" id=8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b namespace=k8s.io
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.322741844Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.888831911Z" level=info msg="RemoveContainer for \"5f99d2a14b7c6e38618baa7187b8c9656dd0546f570fc09eada072d40c5a66b7\""
	Jul 25 19:32:07 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:32:07.895316264Z" level=info msg="RemoveContainer for \"5f99d2a14b7c6e38618baa7187b8c9656dd0546f570fc09eada072d40c5a66b7\" returns successfully"
	Jul 25 19:34:27 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:34:27.215484345Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:34:27 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:34:27.232799010Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Jul 25 19:34:27 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:34:27.234409886Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Jul 25 19:34:27 old-k8s-version-262689 containerd[568]: time="2024-07-25T19:34:27.234439267Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [6bec8a2f6c401528e85ac844c3e4ceeb5eb110b79993a8dbb471973927177fb8] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34825 - 33803 "HINFO IN 7787524366633514884.1458302932824696074. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04454361s
	
	
	==> coredns [aa97a928188d1ef74702a4a9c3b633fdd11aba77013410428649b6f03a4dce05] <==
	I0725 19:29:16.809575       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-25 19:28:46.80802639 +0000 UTC m=+0.098858703) (total time: 30.001439165s):
	Trace[2019727887]: [30.001439165s] [30.001439165s] END
	E0725 19:29:16.809607       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0725 19:29:16.809618       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-25 19:28:46.798678329 +0000 UTC m=+0.089510642) (total time: 30.010918948s):
	Trace[939984059]: [30.010918948s] [30.010918948s] END
	I0725 19:29:16.809627       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-07-25 19:28:46.808503383 +0000 UTC m=+0.099335688) (total time: 30.001020386s):
	Trace[1427131847]: [30.001020386s] [30.001020386s] END
	E0725 19:29:16.809632       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	E0725 19:29:16.809631       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:56372 - 1249 "HINFO IN 5180671977042481159.5930065595496907180. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02055083s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-262689
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-262689
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6f15797740a09e0fb947959f5fd09f2e323bde5a
	                    minikube.k8s.io/name=old-k8s-version-262689
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_25T19_26_19_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 25 Jul 2024 19:26:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-262689
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 25 Jul 2024 19:34:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 25 Jul 2024 19:29:32 +0000   Thu, 25 Jul 2024 19:26:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 25 Jul 2024 19:29:32 +0000   Thu, 25 Jul 2024 19:26:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 25 Jul 2024 19:29:32 +0000   Thu, 25 Jul 2024 19:26:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 25 Jul 2024 19:29:32 +0000   Thu, 25 Jul 2024 19:26:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-262689
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 eff3ebf4acf646da955e7140c1a15fea
	  System UUID:                b2716ca6-790b-4543-b20c-6ea364ff35c0
	  Boot ID:                    6208173b-d514-4152-b2e9-119a649e8fe8
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-djgf4                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m58s
	  kube-system                 etcd-old-k8s-version-262689                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m5s
	  kube-system                 kindnet-dmcbc                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m58s
	  kube-system                 kube-apiserver-old-k8s-version-262689             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-controller-manager-old-k8s-version-262689    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-proxy-srbcv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 kube-scheduler-old-k8s-version-262689             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 metrics-server-9975d5f86-8g9gp                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m56s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-49jgz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-nkqkw               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m25s (x5 over 8m26s)  kubelet     Node old-k8s-version-262689 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s (x5 over 8m26s)  kubelet     Node old-k8s-version-262689 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s (x5 over 8m26s)  kubelet     Node old-k8s-version-262689 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m6s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m5s                   kubelet     Node old-k8s-version-262689 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m5s                   kubelet     Node old-k8s-version-262689 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m5s                   kubelet     Node old-k8s-version-262689 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m5s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                7m58s                  kubelet     Node old-k8s-version-262689 status is now: NodeReady
	  Normal  Starting                 7m56s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)        kubelet     Node old-k8s-version-262689 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet     Node old-k8s-version-262689 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)        kubelet     Node old-k8s-version-262689 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m45s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[  +0.001040] FS-Cache: O-key=[8] '0f6fed0000000000'
	[  +0.000713] FS-Cache: N-cookie c=0000001e [p=00000015 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=0000000057ab999f{9p.inode} n=000000003a1fc167
	[  +0.001042] FS-Cache: N-key=[8] '0f6fed0000000000'
	[  +0.007051] FS-Cache: Duplicate cookie detected
	[  +0.000703] FS-Cache: O-cookie c=00000018 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000981] FS-Cache: O-cookie d=0000000057ab999f{9p.inode} n=0000000040afcf4d
	[  +0.001044] FS-Cache: O-key=[8] '0f6fed0000000000'
	[  +0.000728] FS-Cache: N-cookie c=0000001f [p=00000015 fl=2 nc=0 na=1]
	[  +0.000936] FS-Cache: N-cookie d=0000000057ab999f{9p.inode} n=000000002aec574f
	[  +0.001060] FS-Cache: N-key=[8] '0f6fed0000000000'
	[  +2.902733] FS-Cache: Duplicate cookie detected
	[  +0.000731] FS-Cache: O-cookie c=00000016 [p=00000015 fl=226 nc=0 na=1]
	[  +0.000958] FS-Cache: O-cookie d=0000000057ab999f{9p.inode} n=000000004200c04f
	[  +0.001285] FS-Cache: O-key=[8] '0e6fed0000000000'
	[  +0.000753] FS-Cache: N-cookie c=00000021 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000959] FS-Cache: N-cookie d=0000000057ab999f{9p.inode} n=000000003a1fc167
	[  +0.001072] FS-Cache: N-key=[8] '0e6fed0000000000'
	[  +0.318613] FS-Cache: Duplicate cookie detected
	[  +0.000752] FS-Cache: O-cookie c=0000001b [p=00000015 fl=226 nc=0 na=1]
	[  +0.000982] FS-Cache: O-cookie d=0000000057ab999f{9p.inode} n=00000000b27dda33
	[  +0.001094] FS-Cache: O-key=[8] '146fed0000000000'
	[  +0.000730] FS-Cache: N-cookie c=00000022 [p=00000015 fl=2 nc=0 na=1]
	[  +0.000942] FS-Cache: N-cookie d=0000000057ab999f{9p.inode} n=000000002b202c68
	[  +0.001045] FS-Cache: N-key=[8] '146fed0000000000'
	
	
	==> etcd [1cd92cb68a1c5032b3e67bb3e60e54635f1eab52e0fda097eb7aaf23cbd2b967] <==
	2024-07-25 19:30:31.096448 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:30:41.096390 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:30:51.096605 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:31:01.096473 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:31:11.096451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:31:21.096440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:31:31.096577 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:31:41.096508 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:31:51.096541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:32:01.098108 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:32:11.096730 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:32:21.096444 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:32:31.096575 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:32:41.096373 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:32:51.103888 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:33:01.096509 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:33:11.096380 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:33:21.096625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:33:31.096572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:33:41.096423 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:33:51.096575 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:34:01.096408 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:34:11.096463 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:34:21.096722 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:34:31.096460 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [bbbb23c4486cc3d21d25ad34daca3cfc35fda6d80f6a0e9fcd87f77a79652bb6] <==
	2024-07-25 19:26:07.779446 I | embed: listening for peers on 192.168.85.2:2380
	2024-07-25 19:26:07.779689 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/07/25 19:26:08 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/07/25 19:26:08 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/07/25 19:26:08 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/07/25 19:26:08 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/07/25 19:26:08 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-07-25 19:26:08.460485 I | etcdserver: published {Name:old-k8s-version-262689 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-07-25 19:26:08.460753 I | embed: ready to serve client requests
	2024-07-25 19:26:08.462504 I | embed: serving client requests on 127.0.0.1:2379
	2024-07-25 19:26:08.462769 I | embed: ready to serve client requests
	2024-07-25 19:26:08.463017 I | etcdserver: setting up the initial cluster version to 3.4
	2024-07-25 19:26:08.469146 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-07-25 19:26:08.473547 I | etcdserver/api: enabled capabilities for version 3.4
	2024-07-25 19:26:08.487717 I | embed: serving client requests on 192.168.85.2:2379
	2024-07-25 19:26:30.854893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:26:40.710354 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:26:50.710307 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:27:00.710776 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:27:10.710877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:27:20.710533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:27:30.710281 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:27:40.710366 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:27:50.710363 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-25 19:28:00.710406 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 19:34:33 up  3:17,  0 users,  load average: 2.28, 2.31, 2.86
	Linux old-k8s-version-262689 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [687cf4fb5ec2da066d588dd1c4d731c898dfd63180c14127e4f87575f4f860a8] <==
	I0725 19:26:58.792568       1 main.go:299] handling current node
	I0725 19:27:08.792763       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:27:08.792973       1 main.go:299] handling current node
	W0725 19:27:11.636804       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 19:27:11.636844       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0725 19:27:12.480906       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0725 19:27:12.481002       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0725 19:27:13.192195       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0725 19:27:13.192445       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0725 19:27:18.792235       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:27:18.792525       1 main.go:299] handling current node
	I0725 19:27:28.791847       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:27:28.791885       1 main.go:299] handling current node
	I0725 19:27:38.792225       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:27:38.792295       1 main.go:299] handling current node
	I0725 19:27:48.791819       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:27:48.791856       1 main.go:299] handling current node
	W0725 19:27:55.060099       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0725 19:27:55.060136       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0725 19:27:56.545757       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 19:27:56.545803       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0725 19:27:58.792142       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:27:58.792247       1 main.go:299] handling current node
	W0725 19:28:01.451858       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0725 19:28:01.451903       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	
	
	==> kindnet [ff3fdfb5b2d793d52d00f84a970f452b5e38af8bf34c2152f15e70edaf6cfe01] <==
	I0725 19:33:16.708779       1 main.go:299] handling current node
	I0725 19:33:26.708442       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:33:26.708652       1 main.go:299] handling current node
	W0725 19:33:28.342099       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0725 19:33:28.342354       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0725 19:33:36.708751       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:33:36.708849       1 main.go:299] handling current node
	W0725 19:33:38.956208       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 19:33:38.956479       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0725 19:33:46.708682       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:33:46.708808       1 main.go:299] handling current node
	I0725 19:33:56.708436       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:33:56.708474       1 main.go:299] handling current node
	I0725 19:34:06.708252       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:34:06.708486       1 main.go:299] handling current node
	W0725 19:34:08.489871       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0725 19:34:08.489919       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0725 19:34:16.708303       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:34:16.708342       1 main.go:299] handling current node
	W0725 19:34:23.620509       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0725 19:34:23.620550       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0725 19:34:26.708718       1 main.go:295] Handling node with IPs: map[192.168.85.2:{}]
	I0725 19:34:26.708758       1 main.go:299] handling current node
	W0725 19:34:27.652281       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0725 19:34:27.652336       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	
	
	==> kube-apiserver [8282e5e19263ca28a4b44205c5d5f5dfe8cc3b17c20f817f1123667ae629fca7] <==
	I0725 19:31:32.526292       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:31:32.526301       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0725 19:31:46.560434       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 19:31:46.560514       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:31:46.560529       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:32:08.128885       1 client.go:360] parsed scheme: "passthrough"
	I0725 19:32:08.128929       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:32:08.128937       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0725 19:32:42.457734       1 client.go:360] parsed scheme: "passthrough"
	I0725 19:32:42.457794       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:32:42.457803       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0725 19:33:17.747452       1 client.go:360] parsed scheme: "passthrough"
	I0725 19:33:17.747504       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:33:17.747514       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0725 19:33:42.871104       1 handler_proxy.go:102] no RequestInfo found in the context
	E0725 19:33:42.871468       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0725 19:33:42.871630       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0725 19:33:51.746061       1 client.go:360] parsed scheme: "passthrough"
	I0725 19:33:51.746158       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:33:51.746192       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0725 19:34:21.860139       1 client.go:360] parsed scheme: "passthrough"
	I0725 19:34:21.860187       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:34:21.860196       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [a78a4cf991c24677e1aee0c57c3053ec9d18ed4f3b954e0b55dbaeb5bc2cd3e8] <==
	I0725 19:26:16.076223       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0725 19:26:16.076259       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0725 19:26:16.084684       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0725 19:26:16.089881       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0725 19:26:16.090009       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0725 19:26:16.613495       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0725 19:26:16.660001       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0725 19:26:16.813366       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0725 19:26:16.814977       1 controller.go:606] quota admission added evaluator for: endpoints
	I0725 19:26:16.825164       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0725 19:26:17.732041       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0725 19:26:18.533401       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0725 19:26:18.611285       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0725 19:26:26.984093       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0725 19:26:34.390642       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0725 19:26:34.627741       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0725 19:26:46.162715       1 client.go:360] parsed scheme: "passthrough"
	I0725 19:26:46.163005       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:26:46.163059       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0725 19:27:23.306720       1 client.go:360] parsed scheme: "passthrough"
	I0725 19:27:23.306777       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:27:23.306787       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0725 19:28:03.292393       1 client.go:360] parsed scheme: "passthrough"
	I0725 19:28:03.292455       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0725 19:28:03.292464       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [092cb7f61fd7d0ee33ca01ee05c0526235ab4d055f8a55aebbdc0609b6c057bb] <==
	I0725 19:26:34.569819       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0725 19:26:34.570012       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0725 19:26:34.573743       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0725 19:26:34.581475       1 shared_informer.go:247] Caches are synced for resource quota 
	I0725 19:26:34.594967       1 shared_informer.go:247] Caches are synced for PVC protection 
	I0725 19:26:34.624641       1 shared_informer.go:247] Caches are synced for expand 
	I0725 19:26:34.624641       1 shared_informer.go:247] Caches are synced for stateful set 
	I0725 19:26:34.627689       1 shared_informer.go:247] Caches are synced for persistent volume 
	I0725 19:26:34.629207       1 shared_informer.go:247] Caches are synced for resource quota 
	I0725 19:26:34.651266       1 shared_informer.go:247] Caches are synced for attach detach 
	I0725 19:26:34.662277       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0725 19:26:34.743829       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0725 19:26:34.767820       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-dcw28"
	E0725 19:26:34.801086       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"e597a14f-be32-4fd9-a4b1-85b8ad31516e", ResourceVersion:"281", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63857532379, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240719-e7903573\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40012af880), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40012af8a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x40012af8c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40012af8e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40012af900), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40012af920), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240719-e7903573", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40012af940)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40012af980)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x400111c5a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40007153c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000090ee0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000947f18)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000715420)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0725 19:26:34.812668       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-djgf4"
	I0725 19:26:35.044075       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0725 19:26:35.074540       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0725 19:26:35.074560       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0725 19:26:37.259143       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0725 19:26:37.280213       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-dcw28"
	I0725 19:26:39.331772       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0725 19:28:03.964837       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0725 19:28:04.037807       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0725 19:28:04.043288       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E0725 19:28:04.070020       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [83992657e3b879a7a80d6d310f516828b9ac2d505791880fb62eabc3fdc281b5] <==
	W0725 19:30:09.595896       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0725 19:30:33.248269       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0725 19:30:41.246367       1 request.go:655] Throttling request took 1.048193082s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0725 19:30:42.097913       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0725 19:31:03.749995       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0725 19:31:13.748525       1 request.go:655] Throttling request took 1.046114447s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0725 19:31:14.600065       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0725 19:31:34.251891       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0725 19:31:46.250482       1 request.go:655] Throttling request took 1.04449563s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0725 19:31:47.103438       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0725 19:32:04.753952       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0725 19:32:18.753869       1 request.go:655] Throttling request took 1.048489881s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0725 19:32:19.605454       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0725 19:32:35.255773       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0725 19:32:51.255958       1 request.go:655] Throttling request took 1.048228753s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0725 19:32:52.107796       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0725 19:33:05.757577       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0725 19:33:23.758342       1 request.go:655] Throttling request took 1.046343956s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W0725 19:33:24.609973       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0725 19:33:36.259638       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0725 19:33:56.260611       1 request.go:655] Throttling request took 1.048029691s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0725 19:33:57.112288       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0725 19:34:06.766541       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0725 19:34:28.762776       1 request.go:655] Throttling request took 1.048268639s, request: GET:https://192.168.85.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0725 19:34:29.614286       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [09fa596a52d128c9994b4384da212aa9349f6711170475aac344eeb7c10c8c98] <==
	I0725 19:28:47.039278       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0725 19:28:47.039571       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0725 19:28:47.076559       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0725 19:28:47.076653       1 server_others.go:185] Using iptables Proxier.
	I0725 19:28:47.076990       1 server.go:650] Version: v1.20.0
	I0725 19:28:47.098913       1 config.go:315] Starting service config controller
	I0725 19:28:47.098993       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0725 19:28:47.099082       1 config.go:224] Starting endpoint slice config controller
	I0725 19:28:47.099092       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0725 19:28:47.200189       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0725 19:28:47.200388       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [5a8e2f746885e19969520172b856da71d30879fb506af5db848f0731b0f8ee1f] <==
	I0725 19:26:36.228977       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I0725 19:26:36.229068       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W0725 19:26:36.364091       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0725 19:26:36.364286       1 server_others.go:185] Using iptables Proxier.
	I0725 19:26:36.367590       1 server.go:650] Version: v1.20.0
	I0725 19:26:36.379296       1 config.go:315] Starting service config controller
	I0725 19:26:36.380071       1 config.go:224] Starting endpoint slice config controller
	I0725 19:26:36.380958       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0725 19:26:36.381144       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0725 19:26:36.481091       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0725 19:26:36.482549       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [2a00adbe91bc43396fb4655279df11d46cadb9750a439b2afc0bd4ab7294bd34] <==
	W0725 19:26:15.245763       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0725 19:26:15.245776       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0725 19:26:15.245781       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0725 19:26:15.301844       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0725 19:26:15.302095       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 19:26:15.302111       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 19:26:15.302248       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0725 19:26:15.344716       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0725 19:26:15.344716       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0725 19:26:15.345027       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0725 19:26:15.345264       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0725 19:26:15.345519       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 19:26:15.347651       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0725 19:26:15.347730       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 19:26:15.348446       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0725 19:26:15.348511       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 19:26:15.348757       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 19:26:15.349339       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0725 19:26:15.351466       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0725 19:26:16.193754       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0725 19:26:16.363249       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0725 19:26:16.417202       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0725 19:26:16.432931       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0725 19:26:16.432937       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0725 19:26:16.702189       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [fac92b7ae7a4bca579e6e42dde322f2efd70be0bd69a7361cbdeceff14266ce1] <==
	I0725 19:28:35.037212       1 serving.go:331] Generated self-signed cert in-memory
	I0725 19:28:42.962518       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0725 19:28:42.962614       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0725 19:28:42.962620       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0725 19:28:42.962632       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0725 19:28:42.979037       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 19:28:42.979069       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0725 19:28:42.979090       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 19:28:42.979094       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0725 19:28:43.063085       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I0725 19:28:43.083457       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	I0725 19:28:43.083514       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Jul 25 19:32:59 old-k8s-version-262689 kubelet[657]: E0725 19:32:59.215146     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:33:06 old-k8s-version-262689 kubelet[657]: I0725 19:33:06.214417     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b
	Jul 25 19:33:06 old-k8s-version-262689 kubelet[657]: E0725 19:33:06.214773     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	Jul 25 19:33:12 old-k8s-version-262689 kubelet[657]: E0725 19:33:12.219911     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:33:17 old-k8s-version-262689 kubelet[657]: I0725 19:33:17.214298     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b
	Jul 25 19:33:17 old-k8s-version-262689 kubelet[657]: E0725 19:33:17.215219     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	Jul 25 19:33:25 old-k8s-version-262689 kubelet[657]: E0725 19:33:25.215021     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:33:32 old-k8s-version-262689 kubelet[657]: I0725 19:33:32.214810     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b
	Jul 25 19:33:32 old-k8s-version-262689 kubelet[657]: E0725 19:33:32.216211     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	Jul 25 19:33:38 old-k8s-version-262689 kubelet[657]: E0725 19:33:38.215054     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: I0725 19:33:45.215611     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b
	Jul 25 19:33:45 old-k8s-version-262689 kubelet[657]: E0725 19:33:45.216141     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	Jul 25 19:33:50 old-k8s-version-262689 kubelet[657]: E0725 19:33:50.215161     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: I0725 19:33:58.214315     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b
	Jul 25 19:33:58 old-k8s-version-262689 kubelet[657]: E0725 19:33:58.214660     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	Jul 25 19:34:03 old-k8s-version-262689 kubelet[657]: E0725 19:34:03.215098     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:34:11 old-k8s-version-262689 kubelet[657]: I0725 19:34:11.214384     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b
	Jul 25 19:34:11 old-k8s-version-262689 kubelet[657]: E0725 19:34:11.214739     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	Jul 25 19:34:14 old-k8s-version-262689 kubelet[657]: E0725 19:34:14.215104     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 25 19:34:23 old-k8s-version-262689 kubelet[657]: I0725 19:34:23.214346     657 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8bd318dd74db97a95d70bc0d9930be8040e1c465947f8087a1f41f53a91b5f2b
	Jul 25 19:34:23 old-k8s-version-262689 kubelet[657]: E0725 19:34:23.214716     657 pod_workers.go:191] Error syncing pod d35a2687-c50f-4572-81b9-45b07e9c77a7 ("dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-49jgz_kubernetes-dashboard(d35a2687-c50f-4572-81b9-45b07e9c77a7)"
	Jul 25 19:34:27 old-k8s-version-262689 kubelet[657]: E0725 19:34:27.234738     657 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jul 25 19:34:27 old-k8s-version-262689 kubelet[657]: E0725 19:34:27.234796     657 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jul 25 19:34:27 old-k8s-version-262689 kubelet[657]: E0725 19:34:27.234979     657 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-wbrxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Jul 25 19:34:27 old-k8s-version-262689 kubelet[657]: E0725 19:34:27.235018     657 pod_workers.go:191] Error syncing pod 3897709f-bdc3-4e64-9947-4512473cf65b ("metrics-server-9975d5f86-8g9gp_kube-system(3897709f-bdc3-4e64-9947-4512473cf65b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
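
The pull failures above are expected in this suite: the metrics-server image is deliberately pointed at the unresolvable registry host fake.domain, so the kubelet loops between ErrImagePull and ImagePullBackOff. A minimal sketch reproducing the underlying DNS failure from inside the node, assuming nslookup is present in the node image:

    # Should fail to resolve, matching the kubelet's
    # "lookup fake.domain ... no such host" errors above.
    out/minikube-linux-arm64 ssh -p old-k8s-version-262689 "nslookup fake.domain"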
	
	
	==> kubernetes-dashboard [22dd93359379075cdee6cc988a33beecf77064ec8b1c1b61b00788a08495e2cc] <==
	2024/07/25 19:29:11 Starting overwatch
	2024/07/25 19:29:11 Using namespace: kubernetes-dashboard
	2024/07/25 19:29:11 Using in-cluster config to connect to apiserver
	2024/07/25 19:29:11 Using secret token for csrf signing
	2024/07/25 19:29:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/25 19:29:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/25 19:29:11 Successful initial request to the apiserver, version: v1.20.0
	2024/07/25 19:29:11 Generating JWE encryption key
	2024/07/25 19:29:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/25 19:29:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/25 19:29:12 Initializing JWE encryption key from synchronized object
	2024/07/25 19:29:12 Creating in-cluster Sidecar client
	2024/07/25 19:29:12 Serving insecurely on HTTP port: 9090
	2024/07/25 19:29:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:29:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:30:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:30:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:31:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:31:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:32:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:32:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:33:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:33:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/25 19:34:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1be6aaee701fc8139b79bc32fca06eada56a5b8869fdc79b4a598c2c2505462e] <==
	I0725 19:29:32.412267       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0725 19:29:32.425429       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0725 19:29:32.425663       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0725 19:29:49.886192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0725 19:29:49.886594       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-262689_debc9e94-1a68-4548-aa12-7d560a98bc49!
	I0725 19:29:49.887368       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b2358de-70e0-4b4a-ab15-077bee6a92f1", APIVersion:"v1", ResourceVersion:"836", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-262689_debc9e94-1a68-4548-aa12-7d560a98bc49 became leader
	I0725 19:29:49.986838       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-262689_debc9e94-1a68-4548-aa12-7d560a98bc49!
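
The provisioner serializes through a client-go leader-election lock stored on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above. A minimal sketch for reading back the current holder, assuming the standard control-plane.alpha.kubernetes.io/leader annotation used by Endpoints-based locks:

    kubectl --context old-k8s-version-262689 -n kube-system get endpoints \
        k8s.io-minikube-hostpath \
        -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'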
	
	
	==> storage-provisioner [6ce89219b0ac011ccb222d7354ece78a535ed3974879be78466dbad059580985] <==
	I0725 19:28:46.641964       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0725 19:29:16.664119       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
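
This earlier container instance exited fatally because 10.96.0.1:443, the in-cluster "kubernetes" Service VIP, was not yet routable while the control plane was restarting; the replacement instance above came up cleanly once the apiserver was reachable. A minimal sketch confirming the VIP, assuming the cluster is still up:

    # CLUSTER-IP should be 10.96.0.1, the address the failed dial targeted.
    kubectl --context old-k8s-version-262689 get svc kubernetes -o wide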
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-262689 -n old-k8s-version-262689
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-262689 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-8g9gp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-262689 describe pod metrics-server-9975d5f86-8g9gp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-262689 describe pod metrics-server-9975d5f86-8g9gp: exit status 1 (102.536098ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-8g9gp" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-262689 describe pod metrics-server-9975d5f86-8g9gp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.86s)
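
The trailing NotFound is a benign race rather than a second failure: the metrics-server pod captured by the non-running list was evidently deleted before the describe ran, so describe exits 1. A minimal sketch of the same post-mortem query, reusing the commands the harness ran above:

    # List non-running pods across all namespaces; describe any that remain.
    kubectl --context old-k8s-version-262689 get po -A \
        --field-selector=status.phase!=Running \
        -o=jsonpath='{.items[*].metadata.name}'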


Test pass (303/336)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.55
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.30.3/json-events 7.08
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.08
18 TestDownloadOnly/v1.30.3/DeleteAll 0.2
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.31.0-beta.0/json-events 6.52
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.14
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.36
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.26
30 TestBinaryMirror 0.69
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 224.95
40 TestAddons/serial/GCPAuth/Namespaces 0.18
42 TestAddons/parallel/Registry 15.55
43 TestAddons/parallel/Ingress 19.43
44 TestAddons/parallel/InspektorGadget 12.07
45 TestAddons/parallel/MetricsServer 5.83
48 TestAddons/parallel/CSI 73.5
49 TestAddons/parallel/Headlamp 17.96
50 TestAddons/parallel/CloudSpanner 6.63
51 TestAddons/parallel/LocalPath 10.92
52 TestAddons/parallel/NvidiaDevicePlugin 5.66
53 TestAddons/parallel/Yakd 11.9
54 TestAddons/StoppedEnableDisable 12.37
55 TestCertOptions 42.24
56 TestCertExpiration 229.87
58 TestForceSystemdFlag 44.01
59 TestForceSystemdEnv 40.81
60 TestDockerEnvContainerd 50
65 TestErrorSpam/setup 31.02
66 TestErrorSpam/start 0.69
67 TestErrorSpam/status 1
68 TestErrorSpam/pause 1.69
69 TestErrorSpam/unpause 1.8
70 TestErrorSpam/stop 1.49
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 60.52
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 6.36
77 TestFunctional/serial/KubeContext 0.07
78 TestFunctional/serial/KubectlGetPods 0.1
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.6
82 TestFunctional/serial/CacheCmd/cache/add_local 1.47
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
84 TestFunctional/serial/CacheCmd/cache/list 0.06
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.25
87 TestFunctional/serial/CacheCmd/cache/delete 0.11
88 TestFunctional/serial/MinikubeKubectlCmd 0.15
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
90 TestFunctional/serial/ExtraConfig 41.74
91 TestFunctional/serial/ComponentHealth 0.12
92 TestFunctional/serial/LogsCmd 1.72
93 TestFunctional/serial/LogsFileCmd 1.79
94 TestFunctional/serial/InvalidService 4.3
96 TestFunctional/parallel/ConfigCmd 0.46
97 TestFunctional/parallel/DashboardCmd 16.03
98 TestFunctional/parallel/DryRun 0.52
99 TestFunctional/parallel/InternationalLanguage 0.2
100 TestFunctional/parallel/StatusCmd 1.14
104 TestFunctional/parallel/ServiceCmdConnect 10.74
105 TestFunctional/parallel/AddonsCmd 0.19
106 TestFunctional/parallel/PersistentVolumeClaim 25.98
108 TestFunctional/parallel/SSHCmd 0.74
109 TestFunctional/parallel/CpCmd 2.45
111 TestFunctional/parallel/FileSync 0.34
112 TestFunctional/parallel/CertSync 2.03
116 TestFunctional/parallel/NodeLabels 0.12
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
120 TestFunctional/parallel/License 0.25
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.74
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.49
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
127 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
131 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
132 TestFunctional/parallel/ServiceCmd/DeployApp 8.28
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
134 TestFunctional/parallel/ProfileCmd/profile_list 0.41
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
136 TestFunctional/parallel/MountCmd/any-port 7.5
137 TestFunctional/parallel/ServiceCmd/List 0.58
138 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
139 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
140 TestFunctional/parallel/ServiceCmd/Format 0.49
141 TestFunctional/parallel/ServiceCmd/URL 0.43
142 TestFunctional/parallel/MountCmd/specific-port 2.38
143 TestFunctional/parallel/MountCmd/VerifyCleanup 2.73
144 TestFunctional/parallel/Version/short 0.07
145 TestFunctional/parallel/Version/components 1.3
146 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
147 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
148 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
149 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
150 TestFunctional/parallel/ImageCommands/ImageBuild 3.16
151 TestFunctional/parallel/ImageCommands/Setup 0.86
152 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.6
153 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.24
154 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.58
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.74
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
158 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
159 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
160 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
161 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.98
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestMultiControlPlane/serial/StartCluster 128.18
169 TestMultiControlPlane/serial/DeployApp 29.69
170 TestMultiControlPlane/serial/PingHostFromPods 1.74
171 TestMultiControlPlane/serial/AddWorkerNode 24.58
172 TestMultiControlPlane/serial/NodeLabels 0.11
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
174 TestMultiControlPlane/serial/CopyFile 19.66
175 TestMultiControlPlane/serial/StopSecondaryNode 12.96
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
177 TestMultiControlPlane/serial/RestartSecondaryNode 20.06
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 147.05
180 TestMultiControlPlane/serial/DeleteSecondaryNode 11.67
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.55
182 TestMultiControlPlane/serial/StopCluster 36.17
183 TestMultiControlPlane/serial/RestartCluster 77.6
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
185 TestMultiControlPlane/serial/AddSecondaryNode 43.63
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
190 TestJSONOutput/start/Command 64.07
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.74
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.67
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.83
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.22
215 TestKicCustomNetwork/create_custom_network 40.61
216 TestKicCustomNetwork/use_default_bridge_network 33.43
217 TestKicExistingNetwork 35.98
218 TestKicCustomSubnet 34.54
219 TestKicStaticIP 35.17
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 70.72
224 TestMountStart/serial/StartWithMountFirst 6.63
225 TestMountStart/serial/VerifyMountFirst 0.27
226 TestMountStart/serial/StartWithMountSecond 6.3
227 TestMountStart/serial/VerifyMountSecond 0.25
228 TestMountStart/serial/DeleteFirst 1.64
229 TestMountStart/serial/VerifyMountPostDelete 0.25
230 TestMountStart/serial/Stop 1.2
231 TestMountStart/serial/RestartStopped 8.1
232 TestMountStart/serial/VerifyMountPostStop 0.26
235 TestMultiNode/serial/FreshStart2Nodes 86.4
236 TestMultiNode/serial/DeployApp2Nodes 20
237 TestMultiNode/serial/PingHostFrom2Pods 0.99
238 TestMultiNode/serial/AddNode 17.59
239 TestMultiNode/serial/MultiNodeLabels 0.1
240 TestMultiNode/serial/ProfileList 0.32
241 TestMultiNode/serial/CopyFile 10.13
242 TestMultiNode/serial/StopNode 2.24
243 TestMultiNode/serial/StartAfterStop 9.93
244 TestMultiNode/serial/RestartKeepsNodes 87.1
245 TestMultiNode/serial/DeleteNode 5.55
246 TestMultiNode/serial/StopMultiNode 24.08
247 TestMultiNode/serial/RestartMultiNode 51.63
248 TestMultiNode/serial/ValidateNameConflict 33.84
253 TestPreload 114.41
255 TestScheduledStopUnix 107.38
258 TestInsufficientStorage 11.4
259 TestRunningBinaryUpgrade 88.03
261 TestKubernetesUpgrade 350.95
262 TestMissingContainerUpgrade 146.01
264 TestPause/serial/Start 72.75
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
267 TestNoKubernetes/serial/StartWithK8s 43.7
268 TestNoKubernetes/serial/StartWithStopK8s 16.67
269 TestNoKubernetes/serial/Start 5.94
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
271 TestNoKubernetes/serial/ProfileList 1.01
272 TestNoKubernetes/serial/Stop 1.27
273 TestNoKubernetes/serial/StartNoArgs 7.27
274 TestPause/serial/SecondStartNoReconfiguration 8.66
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
279 TestPause/serial/Pause 0.91
280 TestPause/serial/VerifyStatus 0.4
285 TestNetworkPlugins/group/false 5.09
286 TestPause/serial/Unpause 0.83
287 TestPause/serial/PauseAgain 1.15
288 TestPause/serial/DeletePaused 3
289 TestPause/serial/VerifyDeletedResources 0.16
293 TestStoppedBinaryUpgrade/Setup 1.07
294 TestStoppedBinaryUpgrade/Upgrade 118.23
302 TestNetworkPlugins/group/auto/Start 82.74
303 TestStoppedBinaryUpgrade/MinikubeLogs 1.69
304 TestNetworkPlugins/group/kindnet/Start 75.04
305 TestNetworkPlugins/group/auto/KubeletFlags 0.38
306 TestNetworkPlugins/group/auto/NetCatPod 10.33
307 TestNetworkPlugins/group/auto/DNS 0.24
308 TestNetworkPlugins/group/auto/Localhost 0.21
309 TestNetworkPlugins/group/auto/HairPin 0.22
310 TestNetworkPlugins/group/calico/Start 80.57
311 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
312 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
313 TestNetworkPlugins/group/kindnet/NetCatPod 10.48
314 TestNetworkPlugins/group/kindnet/DNS 0.19
315 TestNetworkPlugins/group/kindnet/Localhost 0.17
316 TestNetworkPlugins/group/kindnet/HairPin 0.19
317 TestNetworkPlugins/group/custom-flannel/Start 65.28
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.41
320 TestNetworkPlugins/group/calico/NetCatPod 11.35
321 TestNetworkPlugins/group/calico/DNS 0.19
322 TestNetworkPlugins/group/calico/Localhost 0.16
323 TestNetworkPlugins/group/calico/HairPin 0.19
324 TestNetworkPlugins/group/enable-default-cni/Start 82.48
325 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
326 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.34
327 TestNetworkPlugins/group/custom-flannel/DNS 0.25
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
330 TestNetworkPlugins/group/flannel/Start 61.24
331 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
332 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
333 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
334 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
335 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
336 TestNetworkPlugins/group/flannel/ControllerPod 6.01
337 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
338 TestNetworkPlugins/group/flannel/NetCatPod 11.36
339 TestNetworkPlugins/group/bridge/Start 88.63
340 TestNetworkPlugins/group/flannel/DNS 0.2
341 TestNetworkPlugins/group/flannel/Localhost 0.37
342 TestNetworkPlugins/group/flannel/HairPin 0.21
344 TestStartStop/group/old-k8s-version/serial/FirstStart 147.35
345 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
346 TestNetworkPlugins/group/bridge/NetCatPod 9.28
347 TestNetworkPlugins/group/bridge/DNS 0.34
348 TestNetworkPlugins/group/bridge/Localhost 0.29
349 TestNetworkPlugins/group/bridge/HairPin 0.26
351 TestStartStop/group/no-preload/serial/FirstStart 71.84
352 TestStartStop/group/old-k8s-version/serial/DeployApp 7.54
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.24
354 TestStartStop/group/old-k8s-version/serial/Stop 12.14
355 TestStartStop/group/no-preload/serial/DeployApp 9.39
356 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
359 TestStartStop/group/no-preload/serial/Stop 12.18
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
361 TestStartStop/group/no-preload/serial/SecondStart 272.01
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
364 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
365 TestStartStop/group/no-preload/serial/Pause 3.24
367 TestStartStop/group/embed-certs/serial/FirstStart 69.84
368 TestStartStop/group/embed-certs/serial/DeployApp 8.46
369 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
370 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
371 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.11
372 TestStartStop/group/embed-certs/serial/Stop 12.33
373 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
374 TestStartStop/group/old-k8s-version/serial/Pause 2.84
375 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
377 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.6
378 TestStartStop/group/embed-certs/serial/SecondStart 270.56
379 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.38
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
381 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
383 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.75
384 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
386 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
387 TestStartStop/group/embed-certs/serial/Pause 3.12
389 TestStartStop/group/newest-cni/serial/FirstStart 37.64
390 TestStartStop/group/newest-cni/serial/DeployApp 0
391 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
392 TestStartStop/group/newest-cni/serial/Stop 1.3
393 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
394 TestStartStop/group/newest-cni/serial/SecondStart 17.79
395 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
396 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
398 TestStartStop/group/newest-cni/serial/Pause 3.71
399 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
400 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
401 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
402 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.03
TestDownloadOnly/v1.20.0/json-events (9.55s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-389541 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-389541 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.545582419s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.55s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-389541
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-389541: exit status 85 (75.474119ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-389541 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |          |
	|         | -p download-only-389541        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:29:34
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:29:34.535507  436900 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:29:34.535763  436900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:29:34.535798  436900 out.go:304] Setting ErrFile to fd 2...
	I0725 18:29:34.535824  436900 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:29:34.536166  436900 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	W0725 18:29:34.536387  436900 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19326-431487/.minikube/config/config.json: open /home/jenkins/minikube-integration/19326-431487/.minikube/config/config.json: no such file or directory
	I0725 18:29:34.536969  436900 out.go:298] Setting JSON to true
	I0725 18:29:34.538089  436900 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7923,"bootTime":1721924251,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 18:29:34.538194  436900 start.go:139] virtualization:  
	I0725 18:29:34.542124  436900 out.go:97] [download-only-389541] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0725 18:29:34.542327  436900 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball: no such file or directory
	I0725 18:29:34.542375  436900 notify.go:220] Checking for updates...
	I0725 18:29:34.545003  436900 out.go:169] MINIKUBE_LOCATION=19326
	I0725 18:29:34.547752  436900 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:29:34.550016  436900 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 18:29:34.552572  436900 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 18:29:34.554801  436900 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0725 18:29:34.559428  436900 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 18:29:34.559752  436900 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:29:34.583023  436900 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 18:29:34.583160  436900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:29:34.646035  436900 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-25 18:29:34.635920714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:29:34.646155  436900 docker.go:307] overlay module found
	I0725 18:29:34.648034  436900 out.go:97] Using the docker driver based on user configuration
	I0725 18:29:34.648065  436900 start.go:297] selected driver: docker
	I0725 18:29:34.648073  436900 start.go:901] validating driver "docker" against <nil>
	I0725 18:29:34.648183  436900 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:29:34.704444  436900 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-25 18:29:34.695335807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:29:34.704619  436900 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 18:29:34.704922  436900 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0725 18:29:34.705076  436900 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 18:29:34.707663  436900 out.go:169] Using Docker driver with root privileges
	I0725 18:29:34.709690  436900 cni.go:84] Creating CNI manager for ""
	I0725 18:29:34.709712  436900 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 18:29:34.709726  436900 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 18:29:34.709827  436900 start.go:340] cluster config:
	{Name:download-only-389541 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-389541 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:29:34.712180  436900 out.go:97] Starting "download-only-389541" primary control-plane node in "download-only-389541" cluster
	I0725 18:29:34.712211  436900 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0725 18:29:34.714081  436900 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0725 18:29:34.714117  436900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0725 18:29:34.714273  436900 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0725 18:29:34.729720  436900 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0725 18:29:34.729902  436900 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0725 18:29:34.730002  436900 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0725 18:29:34.786313  436900 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0725 18:29:34.786392  436900 cache.go:56] Caching tarball of preloaded images
	I0725 18:29:34.786594  436900 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0725 18:29:34.788736  436900 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0725 18:29:34.788758  436900 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0725 18:29:34.902595  436900 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-389541 host does not exist
	  To start a cluster, run: "minikube start -p download-only-389541"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
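
The preload tarball fetched during this test carries its md5 as a URL query parameter. A minimal sketch for verifying the same artifact out-of-band, with the URL and checksum taken verbatim from the log above:

    curl -fLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
    echo "7e3d48ccb9f143791669d02e14ce1643  preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -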

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-389541
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.30.3/json-events (7.08s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-362555 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-362555 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.080083607s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.08s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-362555
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-362555: exit status 85 (76.158865ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-389541 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | -p download-only-389541        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-389541        | download-only-389541 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| start   | -o=json --download-only        | download-only-362555 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | -p download-only-362555        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:29:44
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:29:44.497504  437118 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:29:44.497731  437118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:29:44.497760  437118 out.go:304] Setting ErrFile to fd 2...
	I0725 18:29:44.497780  437118 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:29:44.498091  437118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 18:29:44.498547  437118 out.go:298] Setting JSON to true
	I0725 18:29:44.499535  437118 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7933,"bootTime":1721924251,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 18:29:44.499639  437118 start.go:139] virtualization:  
	I0725 18:29:44.502291  437118 out.go:97] [download-only-362555] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0725 18:29:44.502507  437118 notify.go:220] Checking for updates...
	I0725 18:29:44.504285  437118 out.go:169] MINIKUBE_LOCATION=19326
	I0725 18:29:44.506184  437118 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:29:44.508216  437118 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 18:29:44.510199  437118 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 18:29:44.512274  437118 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0725 18:29:44.516186  437118 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 18:29:44.516511  437118 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:29:44.538171  437118 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 18:29:44.538283  437118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:29:44.608321  437118 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-25 18:29:44.598567611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:29:44.608428  437118 docker.go:307] overlay module found
	I0725 18:29:44.611019  437118 out.go:97] Using the docker driver based on user configuration
	I0725 18:29:44.611049  437118 start.go:297] selected driver: docker
	I0725 18:29:44.611065  437118 start.go:901] validating driver "docker" against <nil>
	I0725 18:29:44.611165  437118 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:29:44.673461  437118 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-25 18:29:44.663453831 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:29:44.673651  437118 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 18:29:44.673992  437118 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0725 18:29:44.674197  437118 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 18:29:44.676427  437118 out.go:169] Using Docker driver with root privileges
	I0725 18:29:44.678137  437118 cni.go:84] Creating CNI manager for ""
	I0725 18:29:44.678157  437118 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 18:29:44.678181  437118 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 18:29:44.678280  437118 start.go:340] cluster config:
	{Name:download-only-362555 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-362555 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:29:44.680308  437118 out.go:97] Starting "download-only-362555" primary control-plane node in "download-only-362555" cluster
	I0725 18:29:44.680342  437118 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0725 18:29:44.682506  437118 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0725 18:29:44.682538  437118 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0725 18:29:44.682706  437118 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0725 18:29:44.697671  437118 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0725 18:29:44.697822  437118 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0725 18:29:44.697844  437118 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0725 18:29:44.697850  437118 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0725 18:29:44.697861  437118 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0725 18:29:44.742642  437118 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0725 18:29:44.742682  437118 cache.go:56] Caching tarball of preloaded images
	I0725 18:29:44.742857  437118 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0725 18:29:44.745103  437118 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0725 18:29:44.745134  437118 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 ...
	I0725 18:29:44.857749  437118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:2969442dcdf6412905c6484ccc8dd1ed -> /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-362555 host does not exist
	  To start a cluster, run: "minikube start -p download-only-362555"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.08s)

TestDownloadOnly/v1.30.3/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.20s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-362555
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0-beta.0/json-events (6.52s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-301222 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-301222 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.521733591s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (6.52s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.14s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-301222
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-301222: exit status 85 (136.925285ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-389541 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | -p download-only-389541             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-389541             | download-only-389541 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| start   | -o=json --download-only             | download-only-362555 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | -p download-only-362555             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| delete  | -p download-only-362555             | download-only-362555 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC | 25 Jul 24 18:29 UTC |
	| start   | -o=json --download-only             | download-only-301222 | jenkins | v1.33.1 | 25 Jul 24 18:29 UTC |                     |
	|         | -p download-only-301222             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/25 18:29:51
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0725 18:29:51.996366  437328 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:29:51.996508  437328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:29:51.996518  437328 out.go:304] Setting ErrFile to fd 2...
	I0725 18:29:51.996524  437328 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:29:51.996777  437328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 18:29:51.997177  437328 out.go:298] Setting JSON to true
	I0725 18:29:51.998070  437328 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":7941,"bootTime":1721924251,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 18:29:51.998142  437328 start.go:139] virtualization:  
	I0725 18:29:52.000488  437328 out.go:97] [download-only-301222] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0725 18:29:52.000832  437328 notify.go:220] Checking for updates...
	I0725 18:29:52.003424  437328 out.go:169] MINIKUBE_LOCATION=19326
	I0725 18:29:52.005492  437328 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:29:52.011824  437328 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 18:29:52.013783  437328 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 18:29:52.015636  437328 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0725 18:29:52.019086  437328 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0725 18:29:52.019403  437328 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:29:52.041606  437328 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 18:29:52.041713  437328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:29:52.102296  437328 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-25 18:29:52.091842717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:29:52.102411  437328 docker.go:307] overlay module found
	I0725 18:29:52.104255  437328 out.go:97] Using the docker driver based on user configuration
	I0725 18:29:52.104280  437328 start.go:297] selected driver: docker
	I0725 18:29:52.104286  437328 start.go:901] validating driver "docker" against <nil>
	I0725 18:29:52.104406  437328 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:29:52.158360  437328 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-25 18:29:52.148791236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:29:52.158531  437328 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0725 18:29:52.158834  437328 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0725 18:29:52.159044  437328 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0725 18:29:52.161303  437328 out.go:169] Using Docker driver with root privileges
	I0725 18:29:52.163317  437328 cni.go:84] Creating CNI manager for ""
	I0725 18:29:52.163346  437328 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0725 18:29:52.163358  437328 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0725 18:29:52.163448  437328 start.go:340] cluster config:
	{Name:download-only-301222 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-301222 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval
:1m0s}
	I0725 18:29:52.165597  437328 out.go:97] Starting "download-only-301222" primary control-plane node in "download-only-301222" cluster
	I0725 18:29:52.165632  437328 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0725 18:29:52.167411  437328 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0725 18:29:52.167440  437328 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0725 18:29:52.167618  437328 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0725 18:29:52.183171  437328 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0725 18:29:52.183301  437328 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0725 18:29:52.183324  437328 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0725 18:29:52.183329  437328 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0725 18:29:52.183337  437328 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0725 18:29:52.240728  437328 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I0725 18:29:52.240764  437328 cache.go:56] Caching tarball of preloaded images
	I0725 18:29:52.240929  437328 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0725 18:29:52.243019  437328 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0725 18:29:52.243049  437328 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	I0725 18:29:52.350505  437328 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:e1550e32e6115d92010b4a739f5f0833 -> /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I0725 18:29:56.869344  437328 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	I0725 18:29:56.869460  437328 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19326-431487/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-301222 host does not exist
	  To start a cluster, run: "minikube start -p download-only-301222"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.14s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.36s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.26s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-301222
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.26s)

TestBinaryMirror (0.69s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-233184 --alsologtostderr --binary-mirror http://127.0.0.1:39159 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-233184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-233184
--- PASS: TestBinaryMirror (0.69s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-673848
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-673848: exit status 85 (75.4994ms)

                                                
                                                
-- stdout --
	* Profile "addons-673848" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-673848"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-673848
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-673848: exit status 85 (73.302357ms)

                                                
                                                
-- stdout --
	* Profile "addons-673848" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-673848"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (224.95s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-673848 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-673848 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m44.945674707s)
--- PASS: TestAddons/Setup (224.95s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-673848 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-673848 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (15.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.538708ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-zfs2w" [ad4c3318-d6a0-4edb-802c-b2f86930c67b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.010407993s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bvs6l" [2d1b2ee7-db00-40ff-8469-b85d4ef39cea] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.005035697s
addons_test.go:342: (dbg) Run:  kubectl --context addons-673848 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-673848 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-673848 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.389557228s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 ip
2024/07/25 18:37:40 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.55s)
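Note: the registry check exercised above can be reproduced by hand against a running profile. This is a minimal sketch, reusing the profile name, image, and service URL that appear in the log (it assumes the registry addon is still enabled):

	# node IP on which registry-proxy publishes port 5000 (the GET above hit 192.168.49.2:5000)
	minikube -p addons-673848 ip
	# probe the in-cluster registry Service from a throwaway busybox pod, as the test does
	kubectl --context addons-673848 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"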

                                                
                                    
TestAddons/parallel/Ingress (19.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-673848 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-673848 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-673848 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c8bea16b-e6d1-4b8b-9930-8442630b9bd9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c8bea16b-e6d1-4b8b-9930-8442630b9bd9] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004439767s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-673848 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-673848 addons disable ingress-dns --alsologtostderr -v=1: (1.84935254s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-673848 addons disable ingress --alsologtostderr -v=1: (7.837365316s)
--- PASS: TestAddons/parallel/Ingress (19.43s)
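Note: a minimal manual sketch of the two ingress checks run above, using the node IP reported earlier in this run (192.168.49.2); it assumes the ingress and ingress-dns addons are still enabled and the test fixtures are applied:

	# request the nginx backend through the ingress controller, matching the test's curl
	minikube -p addons-673848 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# resolve the ingress-dns example record against the node IP
	nslookup hello-john.test 192.168.49.2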

                                                
                                    
TestAddons/parallel/InspektorGadget (12.07s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-8wplf" [09601151-f9a5-45fa-98d3-18de784f9cde] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004944031s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-673848
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-673848: (6.064603737s)
--- PASS: TestAddons/parallel/InspektorGadget (12.07s)

TestAddons/parallel/MetricsServer (5.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.832065ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-zbh9n" [cd479902-1630-4c96-9478-bafeaf4649a1] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004786568s
addons_test.go:417: (dbg) Run:  kubectl --context addons-673848 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)

TestAddons/parallel/CSI (73.5s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.494301ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-673848 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-673848 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [9c46c5d0-589e-422b-9854-d96fc3f51c7f] Pending
helpers_test.go:344: "task-pv-pod" [9c46c5d0-589e-422b-9854-d96fc3f51c7f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [9c46c5d0-589e-422b-9854-d96fc3f51c7f] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003787012s
addons_test.go:590: (dbg) Run:  kubectl --context addons-673848 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-673848 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-673848 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-673848 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-673848 delete pod task-pv-pod: (1.377689394s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-673848 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-673848 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-673848 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [5ca016a2-5214-4ab6-b958-fc1151eb1221] Pending
helpers_test.go:344: "task-pv-pod-restore" [5ca016a2-5214-4ab6-b958-fc1151eb1221] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [5ca016a2-5214-4ab6-b958-fc1151eb1221] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003630406s
addons_test.go:632: (dbg) Run:  kubectl --context addons-673848 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-673848 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-673848 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-673848 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.766579585s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (73.50s)
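Note: the repeated helpers_test.go:394 lines above are a poll of the claim's status. A hedged shell equivalent of one probe, plus an alternative wait not used by the harness itself (object names from the test; the 6m timeout mirrors the test's wait window):

	# poll the claim's phase, as the helper does on each iteration
	kubectl --context addons-673848 get pvc hpvc -n default -o jsonpath='{.status.phase}'
	# or wait directly for the consuming pod created from pv-pod.yaml to become Ready
	kubectl --context addons-673848 wait --namespace default --for=condition=Ready pod/task-pv-pod --timeout=6m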

                                                
                                    
TestAddons/parallel/Headlamp (17.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-673848 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-673848 --alsologtostderr -v=1: (1.13896279s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-5dzkd" [4df323d1-5329-4f5b-afb7-bfdad58f1c9c] Pending
helpers_test.go:344: "headlamp-7867546754-5dzkd" [4df323d1-5329-4f5b-afb7-bfdad58f1c9c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-5dzkd" [4df323d1-5329-4f5b-afb7-bfdad58f1c9c] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003532438s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-673848 addons disable headlamp --alsologtostderr -v=1: (5.812056526s)
--- PASS: TestAddons/parallel/Headlamp (17.96s)
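
Note: the addon tests in this group share one enable/wait/disable shape; for Headlamp the manual equivalent is roughly (a sketch, label selector taken from the wait above):

	out/minikube-linux-arm64 addons enable headlamp -p addons-673848
	kubectl --context addons-673848 -n headlamp get pods -l app.kubernetes.io/name=headlamp
	out/minikube-linux-arm64 -p addons-673848 addons disable headlamp --alsologtostderr -v=1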

TestAddons/parallel/CloudSpanner (6.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-ckrzh" [861f431b-16de-4631-95f3-c8052d2dd360] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003949272s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-673848
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

TestAddons/parallel/LocalPath (10.92s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-673848 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-673848 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-673848 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [b582aab6-d914-45ee-859c-c888128898c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [b582aab6-d914-45ee-859c-c888128898c8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [b582aab6-d914-45ee-859c-c888128898c8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.009679604s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-673848 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 ssh "cat /opt/local-path-provisioner/pvc-1c344bcc-b438-40cd-87f0-3f0edcb610de_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-673848 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-673848 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.92s)
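
Note: the local-path check reproduces with the same manifests the test applies (paths relative to the minikube integration-test tree; poll the PVC until it leaves Pending):

	kubectl --context addons-673848 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-673848 apply -f testdata/storage-provisioner-rancher/pod.yaml
	kubectl --context addons-673848 get pvc test-pvc -o jsonpath={.status.phase}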

TestAddons/parallel/NvidiaDevicePlugin (5.66s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-fc76g" [8d3c3a9e-cce4-488b-9311-576d1c2f87f8] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005726644s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-673848
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.66s)

TestAddons/parallel/Yakd (11.9s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-qr7zr" [c9b08eff-9387-474f-bb59-949b88ddd2a6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003873957s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-673848 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-673848 addons disable yakd --alsologtostderr -v=1: (5.895956802s)
--- PASS: TestAddons/parallel/Yakd (11.90s)

TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-673848
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-673848: (12.098762411s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-673848
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-673848
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-673848
--- PASS: TestAddons/StoppedEnableDisable (12.37s)
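
Note: what this verifies, in CLI terms, is that addon state can be toggled while the cluster is down (sketch):

	out/minikube-linux-arm64 stop -p addons-673848
	out/minikube-linux-arm64 addons enable dashboard -p addons-673848     # still works against a stopped profile
	out/minikube-linux-arm64 addons disable dashboard -p addons-673848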

TestCertOptions (42.24s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-706128 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-706128 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (39.483221221s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-706128 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-706128 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-706128 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-706128" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-706128
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-706128: (2.028075547s)
--- PASS: TestCertOptions (42.24s)
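
Note: to confirm the generated API server certificate actually carries the extra SANs and the non-default port requested above, the same inspection can be run by hand (sketch; the grep pattern is illustrative):

	out/minikube-linux-arm64 -p cert-options-706128 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	kubectl --context cert-options-706128 config view --minify -o jsonpath='{.clusters[0].cluster.server}'    # expect the :8555 port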

TestCertExpiration (229.87s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-496924 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-496924 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.627067051s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-496924 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-496924 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.94257719s)
helpers_test.go:175: Cleaning up "cert-expiration-496924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-496924
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-496924: (2.302470415s)
--- PASS: TestCertExpiration (229.87s)
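
Note: the expiration flow in plain commands: start with deliberately short-lived certs, wait out the window, then restart with a longer --cert-expiration, which forces the expired certs to be regenerated (sketch):

	out/minikube-linux-arm64 start -p cert-expiration-496924 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
	# ...wait for the 3m window to lapse...
	out/minikube-linux-arm64 start -p cert-expiration-496924 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd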

TestForceSystemdFlag (44.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-764531 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-764531 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.352420544s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-764531 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-764531" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-764531
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-764531: (2.351904031s)
--- PASS: TestForceSystemdFlag (44.01s)
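
Note: the "cat /etc/containerd/config.toml" step asserts that --force-systemd switches containerd to the systemd cgroup driver; a manual spot-check (sketch):

	out/minikube-linux-arm64 -p force-systemd-flag-764531 ssh "grep SystemdCgroup /etc/containerd/config.toml"    # expect: SystemdCgroup = true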

TestForceSystemdEnv (40.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-406494 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-406494 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.002937762s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-406494 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-406494" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-406494
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-406494: (2.391938621s)
--- PASS: TestForceSystemdEnv (40.81s)

TestDockerEnvContainerd (50s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-385936 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-385936 --driver=docker  --container-runtime=containerd: (33.627745294s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-385936"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-385936": (1.228540021s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9811QLQYpTXR/agent.456214" SSH_AGENT_PID="456215" DOCKER_HOST=ssh://docker@127.0.0.1:33168 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9811QLQYpTXR/agent.456214" SSH_AGENT_PID="456215" DOCKER_HOST=ssh://docker@127.0.0.1:33168 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9811QLQYpTXR/agent.456214" SSH_AGENT_PID="456215" DOCKER_HOST=ssh://docker@127.0.0.1:33168 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.458627529s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9811QLQYpTXR/agent.456214" SSH_AGENT_PID="456215" DOCKER_HOST=ssh://docker@127.0.0.1:33168 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-9811QLQYpTXR/agent.456214" SSH_AGENT_PID="456215" DOCKER_HOST=ssh://docker@127.0.0.1:33168 docker image ls": (1.025586835s)
helpers_test.go:175: Cleaning up "dockerenv-385936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-385936
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-385936: (1.993529867s)
--- PASS: TestDockerEnvContainerd (50.00s)
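
Note: condensed, the docker-env round-trip above is the eval form of the SSH_AUTH_SOCK/DOCKER_HOST exports shown in the raw commands (sketch):

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-385936)"
	docker version
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls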

TestErrorSpam/setup (31.02s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-260324 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-260324 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-260324 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-260324 --driver=docker  --container-runtime=containerd: (31.023362494s)
--- PASS: TestErrorSpam/setup (31.02s)

TestErrorSpam/start (0.69s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 start --dry-run
--- PASS: TestErrorSpam/start (0.69s)

TestErrorSpam/status (1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 status
--- PASS: TestErrorSpam/status (1.00s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.8s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 stop: (1.298947s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-260324 --log_dir /tmp/nospam-260324 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19326-431487/.minikube/files/etc/test/nested/copy/436893/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (60.52s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-992537 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-992537 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m0.521016504s)
--- PASS: TestFunctional/serial/StartWithProxy (60.52s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.36s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-992537 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-992537 --alsologtostderr -v=8: (6.35010217s)
functional_test.go:659: soft start took 6.360032619s for "functional-992537" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.36s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-992537 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 cache add registry.k8s.io/pause:3.1: (1.677624093s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 cache add registry.k8s.io/pause:3.3: (1.593599859s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 cache add registry.k8s.io/pause:latest: (1.332203052s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.60s)
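
Note: the cache subcommands exercised in this block, back to back (sketch):

	out/minikube-linux-arm64 -p functional-992537 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 cache list
	out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1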

TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-992537 /tmp/TestFunctionalserialCacheCmdcacheadd_local2612118801/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cache add minikube-local-cache-test:functional-992537
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cache delete minikube-local-cache-test:functional-992537
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-992537
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.47s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.297096ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 cache reload: (1.291542488s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.25s)
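
Note: the reload scenario in order: remove the image from the node, confirm it is gone (crictl exits non-zero), then repopulate it from the local cache (sketch):

	out/minikube-linux-arm64 -p functional-992537 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-992537 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # fails: image absent
	out/minikube-linux-arm64 -p functional-992537 cache reload
	out/minikube-linux-arm64 -p functional-992537 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds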

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 kubectl -- --context functional-992537 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-992537 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (41.74s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-992537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-992537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.73450417s)
functional_test.go:757: restart took 41.734786026s for "functional-992537" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.74s)
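
Note: --extra-config takes component.key=value pairs and is persisted into the profile (the ExtraOptions field in the driver-validation dumps later in this report shows it surviving restarts). Sketch:

	out/minikube-linux-arm64 start -p functional-992537 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all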

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-992537 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.72s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 logs: (1.718135172s)
--- PASS: TestFunctional/serial/LogsCmd (1.72s)

TestFunctional/serial/LogsFileCmd (1.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 logs --file /tmp/TestFunctionalserialLogsFileCmd2378420573/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 logs --file /tmp/TestFunctionalserialLogsFileCmd2378420573/001/logs.txt: (1.78860282s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

TestFunctional/serial/InvalidService (4.3s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-992537 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-992537
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-992537: exit status 115 (444.677946ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30651 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-992537 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.30s)
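
Note: the failure mode being checked: a Service whose selector matches no running pod makes "minikube service" exit 115 with SVC_UNREACHABLE (sketch):

	kubectl --context functional-992537 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-992537; echo "exit=$?"    # exit=115
	kubectl --context functional-992537 delete -f testdata/invalidsvc.yaml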

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 config get cpus: exit status 14 (68.357896ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 config get cpus: exit status 14 (77.430688ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
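
Note: "config get" signals an unset key via exit code 14 rather than empty output, which is what the test asserts; interactively (sketch):

	out/minikube-linux-arm64 -p functional-992537 config set cpus 2
	out/minikube-linux-arm64 -p functional-992537 config get cpus      # prints 2
	out/minikube-linux-arm64 -p functional-992537 config unset cpus
	out/minikube-linux-arm64 -p functional-992537 config get cpus; echo "exit=$?"    # exit=14, key not found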

TestFunctional/parallel/DashboardCmd (16.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-992537 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-992537 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 471581: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (16.03s)

TestFunctional/parallel/DryRun (0.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-992537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-992537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (225.114109ms)

-- stdout --
	* [functional-992537] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0725 18:43:36.980712  471174 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:43:36.980908  471174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:43:36.980915  471174 out.go:304] Setting ErrFile to fd 2...
	I0725 18:43:36.980920  471174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:43:36.981182  471174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 18:43:36.981640  471174 out.go:298] Setting JSON to false
	I0725 18:43:36.983166  471174 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8766,"bootTime":1721924251,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 18:43:36.983282  471174 start.go:139] virtualization:  
	I0725 18:43:36.986261  471174 out.go:177] * [functional-992537] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0725 18:43:36.990048  471174 notify.go:220] Checking for updates...
	I0725 18:43:36.990586  471174 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:43:36.992560  471174 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:43:36.994669  471174 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 18:43:36.999194  471174 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 18:43:37.001123  471174 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0725 18:43:37.004369  471174 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:43:37.007842  471174 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 18:43:37.008743  471174 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:43:37.049689  471174 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 18:43:37.049827  471174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:43:37.116979  471174 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-25 18:43:37.106506873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:43:37.117096  471174 docker.go:307] overlay module found
	I0725 18:43:37.120842  471174 out.go:177] * Using the docker driver based on existing profile
	I0725 18:43:37.122918  471174 start.go:297] selected driver: docker
	I0725 18:43:37.122981  471174 start.go:901] validating driver "docker" against &{Name:functional-992537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-992537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:43:37.123093  471174 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:43:37.125651  471174 out.go:177] 
	W0725 18:43:37.127385  471174 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0725 18:43:37.129287  471174 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-992537 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.52s)
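
Note: --dry-run validates flags without mutating the profile; the memory floor asserted above reproduces as (sketch):

	out/minikube-linux-arm64 start -p functional-992537 --dry-run --memory 250MB --driver=docker --container-runtime=containerd; echo "exit=$?"    # exit=23, RSRC_INSUFFICIENT_REQ_MEMORY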

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-992537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-992537 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (197.469981ms)

-- stdout --
	* [functional-992537] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0725 18:43:36.772823  471130 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:43:36.772995  471130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:43:36.773009  471130 out.go:304] Setting ErrFile to fd 2...
	I0725 18:43:36.773015  471130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:43:36.774017  471130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 18:43:36.774428  471130 out.go:298] Setting JSON to false
	I0725 18:43:36.775505  471130 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":8766,"bootTime":1721924251,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 18:43:36.775579  471130 start.go:139] virtualization:  
	I0725 18:43:36.778565  471130 out.go:177] * [functional-992537] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0725 18:43:36.780757  471130 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 18:43:36.780877  471130 notify.go:220] Checking for updates...
	I0725 18:43:36.785145  471130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 18:43:36.786703  471130 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 18:43:36.788565  471130 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 18:43:36.790222  471130 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0725 18:43:36.792189  471130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 18:43:36.794543  471130 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 18:43:36.795142  471130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 18:43:36.820518  471130 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 18:43:36.820655  471130 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:43:36.897937  471130 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-25 18:43:36.887088201 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:43:36.898056  471130 docker.go:307] overlay module found
	I0725 18:43:36.901400  471130 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0725 18:43:36.903215  471130 start.go:297] selected driver: docker
	I0725 18:43:36.903254  471130 start.go:901] validating driver "docker" against &{Name:functional-992537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-992537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0725 18:43:36.903370  471130 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 18:43:36.905787  471130 out.go:177] 
	W0725 18:43:36.907870  471130 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0725 18:43:36.909960  471130 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
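The -f flag above takes an arbitrary Go template over minikube's status struct, so any subset of fields can be pulled out; a minimal sketch reusing only the fields this test exercises (profile name as in this run):

	out/minikube-linux-arm64 -p functional-992537 status -f '{{.Host}},{{.Kubelet}},{{.APIServer}},{{.Kubeconfig}}'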

TestFunctional/parallel/ServiceCmdConnect (10.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-992537 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-992537 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-tzklm" [ed58f5d5-f738-41df-80a6-dcca402d4b85] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-tzklm" [ed58f5d5-f738-41df-80a6-dcca402d4b85] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.00434501s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:32133
functional_test.go:1671: http://192.168.49.2:32133: success! body:

Hostname: hello-node-connect-6f49f58cd5-tzklm

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32133
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.74s)
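The flow above can be replayed by hand with the same three commands the test drives, copied verbatim from this run (the NodePort in the printed URL differs per run):

	kubectl --context functional-992537 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-992537 expose deployment hello-node-connect --type=NodePort --port=8080
	out/minikube-linux-arm64 -p functional-992537 service hello-node-connect --url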

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.98s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [33316bca-b07e-4f61-9129-a6061a57a3c4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004716696s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-992537 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-992537 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-992537 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-992537 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [16f4c487-4179-4d4c-b3ad-9c411c1f9237] Pending
helpers_test.go:344: "sp-pod" [16f4c487-4179-4d4c-b3ad-9c411c1f9237] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [16f4c487-4179-4d4c-b3ad-9c411c1f9237] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00402751s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-992537 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-992537 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-992537 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2ce61b45-3795-42d8-a360-83d8b9ddb25f] Pending
helpers_test.go:344: "sp-pod" [2ce61b45-3795-42d8-a360-83d8b9ddb25f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2ce61b45-3795-42d8-a360-83d8b9ddb25f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004445234s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-992537 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.98s)
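The persistence check here is: bind a claim, write /tmp/mount/foo through one pod, delete that pod, then read the file back through a replacement pod. To confirm by hand that the claim bound, a jsonpath query (an illustration, not part of the test) should print Bound:

	kubectl --context functional-992537 get pvc myclaim -o jsonpath='{.status.phase}'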

TestFunctional/parallel/SSHCmd (0.74s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.74s)

TestFunctional/parallel/CpCmd (2.45s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh -n functional-992537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cp functional-992537:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4080702430/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh -n functional-992537 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh -n functional-992537 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.45s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/436893/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo cat /etc/test/nested/copy/436893/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (2.03s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/436893.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo cat /etc/ssl/certs/436893.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/436893.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo cat /usr/share/ca-certificates/436893.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/4368932.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo cat /etc/ssl/certs/4368932.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/4368932.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo cat /usr/share/ca-certificates/4368932.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.03s)
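The .0 names checked above are OpenSSL subject-hash links into the trust store; the stem for any of the synced PEMs can be recomputed by hand (a sketch using a path from this run, executed inside the VM or against a local copy), and should print 51391683 for the first cert:

	openssl x509 -noout -hash -in /usr/share/ca-certificates/436893.pem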

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-992537 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 ssh "sudo systemctl is-active docker": exit status 1 (324.723037ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 ssh "sudo systemctl is-active crio": exit status 1 (394.736881ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
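Both non-zero exits are the expected result: systemctl is-active exits 0 only for an active unit and 3 for an inactive one, which the ssh wrapper reports as "Process exited with status 3". A hand check (single quotes keep $? from expanding in the local shell):

	out/minikube-linux-arm64 -p functional-992537 ssh 'sudo systemctl is-active docker; echo exit=$?'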

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-992537 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-992537 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-992537 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-992537 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 468780: os: process already finished
helpers_test.go:508: unable to kill pid 468587: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.74s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-992537 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-992537 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [080d8d65-49a4-4db0-acb9-f83e4d97b5be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [080d8d65-49a4-4db0-acb9-f83e4d97b5be] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004717383s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.49s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-992537 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.230.251 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-992537 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-992537 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-992537 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-4bhmj" [0419c842-550b-4948-aea6-681448b1f320] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-65f5d5cc78-4bhmj" [0419c842-550b-4948-aea6-681448b1f320] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.00523465s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.28s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "338.651107ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "70.543793ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "323.794073ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "54.064537ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
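The gap between the timed runs above (323ms vs 54ms) is consistent with --light (and -l in the previous test) skipping the live cluster-status probe that the full listing performs; the fast variant, verbatim from this run:

	out/minikube-linux-arm64 profile list -o json --light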

TestFunctional/parallel/MountCmd/any-port (7.5s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdany-port3521117935/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721933012759271632" to /tmp/TestFunctionalparallelMountCmdany-port3521117935/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721933012759271632" to /tmp/TestFunctionalparallelMountCmdany-port3521117935/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721933012759271632" to /tmp/TestFunctionalparallelMountCmdany-port3521117935/001/test-1721933012759271632
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (341.764913ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 25 18:43 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 25 18:43 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 25 18:43 test-1721933012759271632
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh cat /mount-9p/test-1721933012759271632
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-992537 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [ba98717d-d91a-4342-898b-3567dd48b31d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [ba98717d-d91a-4342-898b-3567dd48b31d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [ba98717d-d91a-4342-898b-3567dd48b31d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.030602664s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-992537 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdany-port3521117935/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.50s)
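The first findmnt probe failing with exit status 1 is expected: the 9p server behind the mount daemon is not up instantly, and the test simply retries the same probe, verbatim:

	out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T /mount-9p | grep 9p"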

TestFunctional/parallel/ServiceCmd/List (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.58s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 service list -o json
functional_test.go:1490: Took "610.707018ms" to run "out/minikube-linux-arm64 -p functional-992537 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:30332
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

TestFunctional/parallel/ServiceCmd/Format (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.49s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:30332
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

TestFunctional/parallel/MountCmd/specific-port (2.38s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdspecific-port1412579301/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (515.105747ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdspecific-port1412579301/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 ssh "sudo umount -f /mount-9p": exit status 1 (317.739312ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-992537 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdspecific-port1412579301/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.38s)
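The --port 46464 flag pins the host side of the 9p server to a fixed port (the default of 0 picks any free port), which is what this test verifies; the invocation, with an illustrative host path:

	out/minikube-linux-arm64 mount -p functional-992537 /tmp/some/dir:/mount-9p --port 46464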

TestFunctional/parallel/MountCmd/VerifyCleanup (2.73s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2112640339/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2112640339/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2112640339/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T" /mount1: exit status 1 (717.104096ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-992537 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2112640339/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2112640339/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-992537 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2112640339/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.73s)
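mount --kill tears down every mount daemon for the profile at once, which is why the three per-mount stop attempts that follow find their parents already gone; verbatim from this run:

	out/minikube-linux-arm64 mount -p functional-992537 --kill=true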

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.3s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 version -o=json --components: (1.303692912s)
--- PASS: TestFunctional/parallel/Version/components (1.30s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-992537 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-992537
docker.io/kindest/kindnetd:v20240719-e7903573
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-992537
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-992537 image ls --format short --alsologtostderr:
I0725 18:43:56.283664  473982 out.go:291] Setting OutFile to fd 1 ...
I0725 18:43:56.283815  473982 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.283826  473982 out.go:304] Setting ErrFile to fd 2...
I0725 18:43:56.283831  473982 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.284089  473982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
I0725 18:43:56.284830  473982 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.284965  473982 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.285450  473982 cli_runner.go:164] Run: docker container inspect functional-992537 --format={{.State.Status}}
I0725 18:43:56.301937  473982 ssh_runner.go:195] Run: systemctl --version
I0725 18:43:56.301994  473982 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992537
I0725 18:43:56.344255  473982 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/functional-992537/id_rsa Username:docker}
I0725 18:43:56.439643  473982 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
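As the stderr trace shows, image ls is a thin wrapper: it opens an ssh session into the node and parses the output of crictl. The same data can be pulled directly with:

	out/minikube-linux-arm64 -p functional-992537 ssh "sudo crictl images --output json"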

TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls --format table --alsologtostderr
E0725 18:43:56.860493  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-992537 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                     | alpine             | sha256:d7cd33 | 18.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-apiserver              | v1.30.3            | sha256:617731 | 29.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-992537  | sha256:321045 | 991B   |
| docker.io/library/nginx                     | latest             | sha256:43b17f | 67.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3            | sha256:8e97cd | 28.4MB |
| registry.k8s.io/kube-scheduler              | v1.30.3            | sha256:d48f99 | 17.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-992537  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240715-585640e9 | sha256:5e3296 | 33.3MB |
| registry.k8s.io/kube-proxy                  | v1.30.3            | sha256:2351f5 | 25.6MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/kindest/kindnetd                  | v20240719-e7903573 | sha256:f42786 | 33.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-992537 image ls --format table --alsologtostderr:
I0725 18:43:56.913635  474152 out.go:291] Setting OutFile to fd 1 ...
I0725 18:43:56.913904  474152 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.913935  474152 out.go:304] Setting ErrFile to fd 2...
I0725 18:43:56.913956  474152 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.914277  474152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
I0725 18:43:56.915119  474152 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.915301  474152 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.915894  474152 cli_runner.go:164] Run: docker container inspect functional-992537 --format={{.State.Status}}
I0725 18:43:56.938258  474152 ssh_runner.go:195] Run: systemctl --version
I0725 18:43:56.938313  474152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992537
I0725 18:43:56.976185  474152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/functional-992537/id_rsa Username:docker}
I0725 18:43:57.081430  474152 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-992537 image ls --format json --alsologtostderr:
[{"id":"sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"28374500"},{"id":"sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"25645955"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-992537"],"size":"2173567"},{"id":"sha256:5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"33290438"},{"
id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"
repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800","repoDigests":["docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"],"repoTags":["docker.io/kindest/kindnetd:v20240719-e7903573"],"size":"33296266"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:3210455161f53c5c3ef11642fc95b
0f177ead4fa6803824b1551f7917e686a06","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-992537"],"size":"991"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c"],"repoTags":["docker.io/library/nginx:latest"],"size":"67647629"},{"id":"sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"17641143"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTa
gs":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"18253575"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"29942692"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-992537 image ls --format json --alsologtostderr:
I0725 18:43:56.635772  474067 out.go:291] Setting OutFile to fd 1 ...
I0725 18:43:56.635962  474067 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.635989  474067 out.go:304] Setting ErrFile to fd 2...
I0725 18:43:56.636008  474067 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.636270  474067 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
I0725 18:43:56.636962  474067 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.637149  474067 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.637726  474067 cli_runner.go:164] Run: docker container inspect functional-992537 --format={{.State.Status}}
I0725 18:43:56.667072  474067 ssh_runner.go:195] Run: systemctl --version
I0725 18:43:56.667125  474067 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992537
I0725 18:43:56.687392  474067 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/functional-992537/id_rsa Username:docker}
I0725 18:43:56.783433  474067 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-992537 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "29942692"
- id: sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "25645955"
- id: sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "17641143"
- id: sha256:5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "33290438"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
repoTags:
- docker.io/library/nginx:alpine
size: "18253575"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-992537
size: "2173567"
- id: sha256:f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800
repoDigests:
- docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a
repoTags:
- docker.io/kindest/kindnetd:v20240719-e7903573
size: "33296266"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:3210455161f53c5c3ef11642fc95b0f177ead4fa6803824b1551f7917e686a06
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-992537
size: "991"
- id: sha256:43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
repoTags:
- docker.io/library/nginx:latest
size: "67647629"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "28374500"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-992537 image ls --format yaml --alsologtostderr:
I0725 18:43:56.343980  473994 out.go:291] Setting OutFile to fd 1 ...
I0725 18:43:56.344214  473994 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.344241  473994 out.go:304] Setting ErrFile to fd 2...
I0725 18:43:56.344260  473994 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.344574  473994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
I0725 18:43:56.345704  473994 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.345900  473994 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.346440  473994 cli_runner.go:164] Run: docker container inspect functional-992537 --format={{.State.Status}}
I0725 18:43:56.368839  473994 ssh_runner.go:195] Run: systemctl --version
I0725 18:43:56.368899  473994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992537
I0725 18:43:56.388653  473994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/functional-992537/id_rsa Username:docker}
I0725 18:43:56.489660  473994 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
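The listings above are derived from the `sudo crictl images --output json` call that appears at the end of each stderr trace; the `image ls` command only reformats that JSON into table/json/yaml views. As a rough illustration (not minikube's own code), here is a minimal Go sketch that decodes the same output; the JSON field names are assumptions based on crictl's CRI-style schema, not taken from minikube's source:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImage mirrors the fields visible in the YAML listing above
// (id, repoTags, repoDigests, size). Field names are assumed, not
// copied from minikube's code.
type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

type criImageList struct {
	Images []criImage `json:"images"`
}

func main() {
	// Equivalent of the command the test runs inside the node over SSH:
	//   sudo crictl images --output json
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list criImageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Printf("%s  tags=%v  size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}
```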

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-992537 ssh pgrep buildkitd: exit status 1 (338.149333ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image build -t localhost/my-image:functional-992537 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 image build -t localhost/my-image:functional-992537 testdata/build --alsologtostderr: (2.596852675s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-992537 image build -t localhost/my-image:functional-992537 testdata/build --alsologtostderr:
I0725 18:43:56.906744  474148 out.go:291] Setting OutFile to fd 1 ...
I0725 18:43:56.907391  474148 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.907410  474148 out.go:304] Setting ErrFile to fd 2...
I0725 18:43:56.907416  474148 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0725 18:43:56.907668  474148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
I0725 18:43:56.908404  474148 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.910547  474148 config.go:182] Loaded profile config "functional-992537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0725 18:43:56.911225  474148 cli_runner.go:164] Run: docker container inspect functional-992537 --format={{.State.Status}}
I0725 18:43:56.931099  474148 ssh_runner.go:195] Run: systemctl --version
I0725 18:43:56.931149  474148 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-992537
I0725 18:43:56.957441  474148 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/functional-992537/id_rsa Username:docker}
I0725 18:43:57.051909  474148 build_images.go:161] Building image from path: /tmp/build.382401766.tar
I0725 18:43:57.051997  474148 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0725 18:43:57.062626  474148 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.382401766.tar
I0725 18:43:57.067566  474148 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.382401766.tar: stat -c "%s %y" /var/lib/minikube/build/build.382401766.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.382401766.tar': No such file or directory
I0725 18:43:57.067600  474148 ssh_runner.go:362] scp /tmp/build.382401766.tar --> /var/lib/minikube/build/build.382401766.tar (3072 bytes)
I0725 18:43:57.097414  474148 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.382401766
I0725 18:43:57.108765  474148 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.382401766 -xf /var/lib/minikube/build/build.382401766.tar
I0725 18:43:57.123890  474148 containerd.go:394] Building image: /var/lib/minikube/build/build.382401766
I0725 18:43:57.123987  474148 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.382401766 --local dockerfile=/var/lib/minikube/build/build.382401766 --output type=image,name=localhost/my-image:functional-992537
#1 [internal] load build definition from Dockerfile
#1 DONE 0.0s

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:eb6c6007067bec43a4dc1e06488f99ed6920bdf19c39f74b9263edbca68759dd
#8 exporting manifest sha256:eb6c6007067bec43a4dc1e06488f99ed6920bdf19c39f74b9263edbca68759dd 0.0s done
#8 exporting config sha256:78c8309e14ad51131df05f1160564c7b8e405f4aa48c2eea8da610a6519d462b 0.0s done
#8 naming to localhost/my-image:functional-992537 done
#8 DONE 0.1s
I0725 18:43:59.393581  474148 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.382401766 --local dockerfile=/var/lib/minikube/build/build.382401766 --output type=image,name=localhost/my-image:functional-992537: (2.269556959s)
I0725 18:43:59.393653  474148 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.382401766
I0725 18:43:59.403332  474148 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.382401766.tar
I0725 18:43:59.412924  474148 build_images.go:217] Built localhost/my-image:functional-992537 from /tmp/build.382401766.tar
I0725 18:43:59.412959  474148 build_images.go:133] succeeded building to: functional-992537
I0725 18:43:59.412965  474148 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)
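The trace above shows the build path end to end: the context directory is packed into /tmp/build.382401766.tar, copied into the node, unpacked under /var/lib/minikube/build, and built with buildctl's dockerfile.v0 frontend. Judging from the recorded steps, the 97-byte Dockerfile amounts to a FROM gcr.io/k8s-minikube/busybox:latest, a RUN true, and an ADD content.txt /. Below is a simplified, local-only Go sketch of the same pack-and-build flow; it skips the SSH copy step and is not minikube's actual build_images.go:

```go
package main

import (
	"archive/tar"
	"io"
	"os"
	"os/exec"
	"path/filepath"
)

// tarDir packs dir into a tar archive at tarPath, mirroring the
// /tmp/build.NNN.tar staging step in the log (regular files only).
func tarDir(dir, tarPath string) error {
	f, err := os.Create(tarPath)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()

	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	// Hypothetical local paths standing in for testdata/build and the
	// staging tarball; the real test copies the tar into the node first.
	ctx, tarball := "testdata/build", "/tmp/build.local.tar"
	if err := tarDir(ctx, tarball); err != nil {
		panic(err)
	}
	// Same buildctl invocation as in the trace, pointed at the local context.
	cmd := exec.Command("sudo", "buildctl", "build",
		"--frontend", "dockerfile.v0",
		"--local", "context="+ctx,
		"--local", "dockerfile="+ctx,
		"--output", "type=image,name=localhost/my-image:functional-992537")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```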

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
E0725 18:43:46.620180  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:43:46.626018  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:43:46.636258  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:43:46.656516  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:43:46.696770  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:43:46.777366  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:43:46.937698  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:43:47.257998  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-992537
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.86s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image load --daemon docker.io/kicbase/echo-server:functional-992537 --alsologtostderr
E0725 18:43:47.898175  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 image load --daemon docker.io/kicbase/echo-server:functional-992537 --alsologtostderr: (1.36888763s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.60s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image load --daemon docker.io/kicbase/echo-server:functional-992537 --alsologtostderr
E0725 18:43:49.179275  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-992537
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image load --daemon docker.io/kicbase/echo-server:functional-992537 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-992537 image load --daemon docker.io/kicbase/echo-server:functional-992537 --alsologtostderr: (1.058611157s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls
E0725 18:43:51.740340  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image save docker.io/kicbase/echo-server:functional-992537 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image rm docker.io/kicbase/echo-server:functional-992537 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
2024/07/25 18:43:53 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-992537
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-992537 image save --daemon docker.io/kicbase/echo-server:functional-992537 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-992537
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.98s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-992537
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-992537
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-992537
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (128.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-006583 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0725 18:44:07.100748  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:44:27.581112  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:45:08.541363  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-006583 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m7.350762689s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (128.18s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (29.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- rollout status deployment/busybox
E0725 18:46:30.461544  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-006583 -- rollout status deployment/busybox: (26.723941169s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fg9p7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fjc6x -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-rpdrz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fg9p7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fjc6x -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-rpdrz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fg9p7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fjc6x -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-rpdrz -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (29.69s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fg9p7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fg9p7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fjc6x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-fjc6x -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-rpdrz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-006583 -- exec busybox-fc5497c4f-rpdrz -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.74s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-006583 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-006583 -v=7 --alsologtostderr: (23.568294084s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr: (1.015441274s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.58s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-006583 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.66s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-006583 status --output json -v=7 --alsologtostderr: (1.001181634s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp testdata/cp-test.txt ha-006583:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile576751339/001/cp-test_ha-006583.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583:/home/docker/cp-test.txt ha-006583-m02:/home/docker/cp-test_ha-006583_ha-006583-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m02 "sudo cat /home/docker/cp-test_ha-006583_ha-006583-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583:/home/docker/cp-test.txt ha-006583-m03:/home/docker/cp-test_ha-006583_ha-006583-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m03 "sudo cat /home/docker/cp-test_ha-006583_ha-006583-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583:/home/docker/cp-test.txt ha-006583-m04:/home/docker/cp-test_ha-006583_ha-006583-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m04 "sudo cat /home/docker/cp-test_ha-006583_ha-006583-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp testdata/cp-test.txt ha-006583-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile576751339/001/cp-test_ha-006583-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m02:/home/docker/cp-test.txt ha-006583:/home/docker/cp-test_ha-006583-m02_ha-006583.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583 "sudo cat /home/docker/cp-test_ha-006583-m02_ha-006583.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m02:/home/docker/cp-test.txt ha-006583-m03:/home/docker/cp-test_ha-006583-m02_ha-006583-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m03 "sudo cat /home/docker/cp-test_ha-006583-m02_ha-006583-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m02:/home/docker/cp-test.txt ha-006583-m04:/home/docker/cp-test_ha-006583-m02_ha-006583-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m04 "sudo cat /home/docker/cp-test_ha-006583-m02_ha-006583-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp testdata/cp-test.txt ha-006583-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile576751339/001/cp-test_ha-006583-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m03:/home/docker/cp-test.txt ha-006583:/home/docker/cp-test_ha-006583-m03_ha-006583.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583 "sudo cat /home/docker/cp-test_ha-006583-m03_ha-006583.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m03:/home/docker/cp-test.txt ha-006583-m02:/home/docker/cp-test_ha-006583-m03_ha-006583-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m02 "sudo cat /home/docker/cp-test_ha-006583-m03_ha-006583-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m03:/home/docker/cp-test.txt ha-006583-m04:/home/docker/cp-test_ha-006583-m03_ha-006583-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m04 "sudo cat /home/docker/cp-test_ha-006583-m03_ha-006583-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp testdata/cp-test.txt ha-006583-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile576751339/001/cp-test_ha-006583-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m04:/home/docker/cp-test.txt ha-006583:/home/docker/cp-test_ha-006583-m04_ha-006583.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583 "sudo cat /home/docker/cp-test_ha-006583-m04_ha-006583.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m04:/home/docker/cp-test.txt ha-006583-m02:/home/docker/cp-test_ha-006583-m04_ha-006583-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m02 "sudo cat /home/docker/cp-test_ha-006583-m04_ha-006583-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 cp ha-006583-m04:/home/docker/cp-test.txt ha-006583-m03:/home/docker/cp-test_ha-006583-m04_ha-006583-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 ssh -n ha-006583-m03 "sudo cat /home/docker/cp-test_ha-006583-m04_ha-006583-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.66s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-006583 node stop m02 -v=7 --alsologtostderr: (12.150331481s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr: exit status 7 (808.985002ms)

                                                
                                                
-- stdout --
	ha-006583
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-006583-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006583-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-006583-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:47:39.986724  490652 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:47:39.987003  490652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:39.987036  490652 out.go:304] Setting ErrFile to fd 2...
	I0725 18:47:39.987055  490652 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:47:39.987330  490652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 18:47:39.987587  490652 out.go:298] Setting JSON to false
	I0725 18:47:39.987656  490652 mustload.go:65] Loading cluster: ha-006583
	I0725 18:47:39.987761  490652 notify.go:220] Checking for updates...
	I0725 18:47:39.988184  490652 config.go:182] Loaded profile config "ha-006583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 18:47:39.988224  490652 status.go:255] checking status of ha-006583 ...
	I0725 18:47:39.989377  490652 cli_runner.go:164] Run: docker container inspect ha-006583 --format={{.State.Status}}
	I0725 18:47:40.052155  490652 status.go:330] ha-006583 host status = "Running" (err=<nil>)
	I0725 18:47:40.052178  490652 host.go:66] Checking if "ha-006583" exists ...
	I0725 18:47:40.052482  490652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-006583
	I0725 18:47:40.084811  490652 host.go:66] Checking if "ha-006583" exists ...
	I0725 18:47:40.085139  490652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:47:40.085194  490652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-006583
	I0725 18:47:40.105010  490652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33183 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/ha-006583/id_rsa Username:docker}
	I0725 18:47:40.205553  490652 ssh_runner.go:195] Run: systemctl --version
	I0725 18:47:40.211044  490652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:47:40.225215  490652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 18:47:40.291945  490652 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-25 18:47:40.281848376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 18:47:40.292581  490652 kubeconfig.go:125] found "ha-006583" server: "https://192.168.49.254:8443"
	I0725 18:47:40.292620  490652 api_server.go:166] Checking apiserver status ...
	I0725 18:47:40.292666  490652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:47:40.305711  490652 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1584/cgroup
	I0725 18:47:40.316450  490652 api_server.go:182] apiserver freezer: "6:freezer:/docker/95d2abc83f6990b5e6a5768ea0eeec7ca604f8135efbba859a5024ee640f295e/kubepods/burstable/pod8825131e4a2407aa9d950429f47a2c7a/7f7ba5603617239e69a27fb02dc3e75caa8675f7ec06e40248d3ea2bb973034f"
	I0725 18:47:40.316538  490652 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/95d2abc83f6990b5e6a5768ea0eeec7ca604f8135efbba859a5024ee640f295e/kubepods/burstable/pod8825131e4a2407aa9d950429f47a2c7a/7f7ba5603617239e69a27fb02dc3e75caa8675f7ec06e40248d3ea2bb973034f/freezer.state
	I0725 18:47:40.326482  490652 api_server.go:204] freezer state: "THAWED"
	I0725 18:47:40.326529  490652 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0725 18:47:40.335381  490652 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0725 18:47:40.335410  490652 status.go:422] ha-006583 apiserver status = Running (err=<nil>)
	I0725 18:47:40.335422  490652 status.go:257] ha-006583 status: &{Name:ha-006583 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:47:40.335448  490652 status.go:255] checking status of ha-006583-m02 ...
	I0725 18:47:40.335772  490652 cli_runner.go:164] Run: docker container inspect ha-006583-m02 --format={{.State.Status}}
	I0725 18:47:40.355312  490652 status.go:330] ha-006583-m02 host status = "Stopped" (err=<nil>)
	I0725 18:47:40.355340  490652 status.go:343] host is not running, skipping remaining checks
	I0725 18:47:40.355349  490652 status.go:257] ha-006583-m02 status: &{Name:ha-006583-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:47:40.355370  490652 status.go:255] checking status of ha-006583-m03 ...
	I0725 18:47:40.355709  490652 cli_runner.go:164] Run: docker container inspect ha-006583-m03 --format={{.State.Status}}
	I0725 18:47:40.373582  490652 status.go:330] ha-006583-m03 host status = "Running" (err=<nil>)
	I0725 18:47:40.373611  490652 host.go:66] Checking if "ha-006583-m03" exists ...
	I0725 18:47:40.373913  490652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-006583-m03
	I0725 18:47:40.393429  490652 host.go:66] Checking if "ha-006583-m03" exists ...
	I0725 18:47:40.393742  490652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:47:40.393791  490652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-006583-m03
	I0725 18:47:40.411262  490652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33193 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/ha-006583-m03/id_rsa Username:docker}
	I0725 18:47:40.504501  490652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:47:40.529865  490652 kubeconfig.go:125] found "ha-006583" server: "https://192.168.49.254:8443"
	I0725 18:47:40.529900  490652 api_server.go:166] Checking apiserver status ...
	I0725 18:47:40.529945  490652 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 18:47:40.545672  490652 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1441/cgroup
	I0725 18:47:40.555602  490652 api_server.go:182] apiserver freezer: "6:freezer:/docker/40c1a8e43307db6b92e7ed5c634cc8a8d23294ff95d40522fa4faa32a19d5633/kubepods/burstable/pod1da839872d9eae38ee916b893603d1d8/695d0436ffdb5f73b5acd3d8eb25c03cca346ec069e30b1e122937e4c0c683dd"
	I0725 18:47:40.555701  490652 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/40c1a8e43307db6b92e7ed5c634cc8a8d23294ff95d40522fa4faa32a19d5633/kubepods/burstable/pod1da839872d9eae38ee916b893603d1d8/695d0436ffdb5f73b5acd3d8eb25c03cca346ec069e30b1e122937e4c0c683dd/freezer.state
	I0725 18:47:40.565980  490652 api_server.go:204] freezer state: "THAWED"
	I0725 18:47:40.566012  490652 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0725 18:47:40.573899  490652 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0725 18:47:40.573927  490652 status.go:422] ha-006583-m03 apiserver status = Running (err=<nil>)
	I0725 18:47:40.573938  490652 status.go:257] ha-006583-m03 status: &{Name:ha-006583-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:47:40.573980  490652 status.go:255] checking status of ha-006583-m04 ...
	I0725 18:47:40.574325  490652 cli_runner.go:164] Run: docker container inspect ha-006583-m04 --format={{.State.Status}}
	I0725 18:47:40.592942  490652 status.go:330] ha-006583-m04 host status = "Running" (err=<nil>)
	I0725 18:47:40.592970  490652 host.go:66] Checking if "ha-006583-m04" exists ...
	I0725 18:47:40.593354  490652 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-006583-m04
	I0725 18:47:40.612925  490652 host.go:66] Checking if "ha-006583-m04" exists ...
	I0725 18:47:40.615218  490652 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 18:47:40.615281  490652 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-006583-m04
	I0725 18:47:40.633393  490652 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33199 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/ha-006583-m04/id_rsa Username:docker}
	I0725 18:47:40.728185  490652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 18:47:40.740353  490652 status.go:257] ha-006583-m04 status: &{Name:ha-006583-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.96s)
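The stderr trace above shows how `status` decides that an apiserver is Running: it finds the kube-apiserver pid with pgrep, reads that pid's freezer cgroup, requires the state to be THAWED, and then probes https://192.168.49.254:8443/healthz for a 200. A standalone Go sketch of that sequence, run directly on a node rather than over SSH as the real command does; the cgroup paths and the insecure TLS client are illustrative assumptions, not minikube's actual status code:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// 1. Find the kube-apiserver pid (same pgrep pattern as in the trace).
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver: Stopped (no process)")
		return
	}
	pid := strings.TrimSpace(string(out))

	// 2. Read the process's freezer cgroup and its state; a FROZEN
	// (paused) apiserver should not be reported as Running.
	if cg, err := os.ReadFile("/proc/" + pid + "/cgroup"); err == nil {
		for _, line := range strings.Split(string(cg), "\n") {
			if strings.Contains(line, ":freezer:") {
				path := line[strings.LastIndex(line, ":")+1:]
				state, _ := os.ReadFile("/sys/fs/cgroup/freezer" + path + "/freezer.state")
				if strings.TrimSpace(string(state)) == "FROZEN" {
					fmt.Println("apiserver: Paused")
					return
				}
			}
		}
	}

	// 3. Probe the load-balanced endpoint's healthz, as the status log does.
	// The cluster serves a self-signed cert, so verification is skipped here.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("apiserver: Error")
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		fmt.Printf("apiserver: Error (healthz returned %d)\n", resp.StatusCode)
		return
	}
	fmt.Println("apiserver: Running")
}
```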

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (20.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-006583 node start m02 -v=7 --alsologtostderr: (18.195975523s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr: (1.718852923s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.019659802s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-006583 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-006583 -v=7 --alsologtostderr
E0725 18:48:05.090382  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:05.096824  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:05.107095  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:05.128016  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:05.168362  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:05.249444  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:05.409673  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:05.729786  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:06.370900  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:07.651268  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:10.211958  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:15.332144  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:25.573333  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-006583 -v=7 --alsologtostderr: (37.343661589s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-006583 --wait=true -v=7 --alsologtostderr
E0725 18:48:46.053710  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:48:46.618629  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:49:14.302260  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 18:49:27.014972  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-006583 --wait=true -v=7 --alsologtostderr: (1m49.52870221s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-006583
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.05s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.67s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-006583 node delete m03 -v=7 --alsologtostderr: (10.710836587s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.67s)
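
The last step above pipes the node list through a Go template that prints each node's Ready condition. As a hedged aside (this snippet is not taken from ha_test.go), the same template string can be exercised locally with Go's text/template against a stub node list, which shows what the check consumes:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// The template from the kubectl command above (outer quoting dropped).
	const ready = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Stub NodeList JSON carrying only the fields the template touches.
	raw := `{"items":[
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	  {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	var doc map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &doc); err != nil {
		panic(err)
	}
	t := template.Must(template.New("ready").Parse(ready))
	if err := t.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
	// Prints " True" once per node, matching the lines the test inspects.
}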

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.55s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.17s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 stop -v=7 --alsologtostderr
E0725 18:50:48.936398  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-006583 stop -v=7 --alsologtostderr: (36.066129264s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr: exit status 7 (106.094348ms)

                                                
                                                
-- stdout --
	ha-006583
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006583-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-006583-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 18:51:17.760702  505083 out.go:291] Setting OutFile to fd 1 ...
	I0725 18:51:17.760842  505083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:51:17.760852  505083 out.go:304] Setting ErrFile to fd 2...
	I0725 18:51:17.760857  505083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 18:51:17.761108  505083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 18:51:17.761324  505083 out.go:298] Setting JSON to false
	I0725 18:51:17.761369  505083 mustload.go:65] Loading cluster: ha-006583
	I0725 18:51:17.761452  505083 notify.go:220] Checking for updates...
	I0725 18:51:17.761779  505083 config.go:182] Loaded profile config "ha-006583": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 18:51:17.761799  505083 status.go:255] checking status of ha-006583 ...
	I0725 18:51:17.762292  505083 cli_runner.go:164] Run: docker container inspect ha-006583 --format={{.State.Status}}
	I0725 18:51:17.781062  505083 status.go:330] ha-006583 host status = "Stopped" (err=<nil>)
	I0725 18:51:17.781086  505083 status.go:343] host is not running, skipping remaining checks
	I0725 18:51:17.781094  505083 status.go:257] ha-006583 status: &{Name:ha-006583 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:51:17.781126  505083 status.go:255] checking status of ha-006583-m02 ...
	I0725 18:51:17.781418  505083 cli_runner.go:164] Run: docker container inspect ha-006583-m02 --format={{.State.Status}}
	I0725 18:51:17.797313  505083 status.go:330] ha-006583-m02 host status = "Stopped" (err=<nil>)
	I0725 18:51:17.797338  505083 status.go:343] host is not running, skipping remaining checks
	I0725 18:51:17.797346  505083 status.go:257] ha-006583-m02 status: &{Name:ha-006583-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 18:51:17.797371  505083 status.go:255] checking status of ha-006583-m04 ...
	I0725 18:51:17.797696  505083 cli_runner.go:164] Run: docker container inspect ha-006583-m04 --format={{.State.Status}}
	I0725 18:51:17.820299  505083 status.go:330] ha-006583-m04 host status = "Stopped" (err=<nil>)
	I0725 18:51:17.820327  505083 status.go:343] host is not running, skipping remaining checks
	I0725 18:51:17.820335  505083 status.go:257] ha-006583-m04 status: &{Name:ha-006583-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.17s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (77.6s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-006583 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-006583 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.608519101s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.60s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (43.63s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-006583 --control-plane -v=7 --alsologtostderr
E0725 18:53:05.089932  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-006583 --control-plane -v=7 --alsologtostderr: (42.624655505s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-006583 status -v=7 --alsologtostderr: (1.001942243s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.63s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

                                                
                                    
TestJSONOutput/start/Command (64.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-999148 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0725 18:53:32.776684  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 18:53:46.618973  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-999148 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m4.061173312s)
--- PASS: TestJSONOutput/start/Command (64.07s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-999148 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-999148 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.83s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-999148 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-999148 --output=json --user=testUser: (5.827999166s)
--- PASS: TestJSONOutput/stop/Command (5.83s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-355712 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-355712 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.041476ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"08b378c3-9c3f-449b-b64a-10f1798a9b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-355712] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a7f47cb-ef65-4e1d-98fe-f1458a6ee2fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19326"}}
	{"specversion":"1.0","id":"cf6fac86-eb95-4da2-9353-89edc075bbf0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2746c0d1-6c6f-4b82-b91f-b85fb4b78bdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig"}}
	{"specversion":"1.0","id":"9a632549-5f68-4e14-a1a7-7b3cfb512020","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube"}}
	{"specversion":"1.0","id":"09430f12-c785-4b80-93e9-7af3fbf575ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"8ca78f6b-d11a-4592-bc98-883e9756c5fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e5aeb42e-5204-4e94-93a8-01f0c3f2d4da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-355712" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-355712
--- PASS: TestErrorJSONOutput (0.22s)
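
The --output=json records in the stdout block above are CloudEvents-style envelopes (specversion, id, source, type, data). As an illustrative sketch only, not part of the test suite, the error event shown there could be decoded in Go along these lines, declaring just the fields visible in the log:

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors only the fields used below; the real schema carries more.
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		ExitCode string `json:"exitcode"`
		Message  string `json:"message"`
	} `json:"data"`
}

func main() {
	// Abridged copy of the io.k8s.sigs.minikube.error line from the output above.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`

	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		panic(err)
	}
	if e.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
	}
}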

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.61s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-745017 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-745017 --network=: (38.532792593s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-745017" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-745017
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-745017: (2.048958867s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.61s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.43s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-166482 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-166482 --network=bridge: (31.454587567s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-166482" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-166482
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-166482: (1.945912776s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.43s)

                                                
                                    
TestKicExistingNetwork (35.98s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-892898 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-892898 --network=existing-network: (33.857763057s)
helpers_test.go:175: Cleaning up "existing-network-892898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-892898
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-892898: (1.967865059s)
--- PASS: TestKicExistingNetwork (35.98s)

                                                
                                    
TestKicCustomSubnet (34.54s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-003634 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-003634 --subnet=192.168.60.0/24: (32.44304551s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-003634 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-003634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-003634
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-003634: (2.080339977s)
--- PASS: TestKicCustomSubnet (34.54s)

                                                
                                    
TestKicStaticIP (35.17s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-379833 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-379833 --static-ip=192.168.200.200: (32.893821827s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-379833 ip
helpers_test.go:175: Cleaning up "static-ip-379833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-379833
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-379833: (2.13265402s)
--- PASS: TestKicStaticIP (35.17s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (70.72s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-996446 --driver=docker  --container-runtime=containerd
E0725 18:58:05.090343  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-996446 --driver=docker  --container-runtime=containerd: (33.748719872s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-999416 --driver=docker  --container-runtime=containerd
E0725 18:58:46.619064  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-999416 --driver=docker  --container-runtime=containerd: (31.468124652s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-996446
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-999416
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-999416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-999416
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-999416: (1.970463502s)
helpers_test.go:175: Cleaning up "first-996446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-996446
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-996446: (2.268578335s)
--- PASS: TestMinikubeProfile (70.72s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.63s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-227018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-227018 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.626607796s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.63s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-227018 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.3s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-239859 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-239859 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.294810843s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.30s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-239859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-227018 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-227018 --alsologtostderr -v=5: (1.636589887s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-239859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-239859
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-239859: (1.204029951s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.1s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-239859
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-239859: (7.102085847s)
--- PASS: TestMountStart/serial/RestartStopped (8.10s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-239859 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (86.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793678 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0725 19:00:09.663067  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-793678 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m25.866496508s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.40s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (20s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-793678 -- rollout status deployment/busybox: (18.062577669s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-dswsl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-qjbc9 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-dswsl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-qjbc9 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-dswsl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-qjbc9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.00s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-dswsl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-dswsl -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-qjbc9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-793678 -- exec busybox-fc5497c4f-qjbc9 -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
TestMultiNode/serial/AddNode (17.59s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-793678 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-793678 -v 3 --alsologtostderr: (16.903946255s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.59s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-793678 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.32s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.13s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp testdata/cp-test.txt multinode-793678:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile422900635/001/cp-test_multinode-793678.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678:/home/docker/cp-test.txt multinode-793678-m02:/home/docker/cp-test_multinode-793678_multinode-793678-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m02 "sudo cat /home/docker/cp-test_multinode-793678_multinode-793678-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678:/home/docker/cp-test.txt multinode-793678-m03:/home/docker/cp-test_multinode-793678_multinode-793678-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m03 "sudo cat /home/docker/cp-test_multinode-793678_multinode-793678-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp testdata/cp-test.txt multinode-793678-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile422900635/001/cp-test_multinode-793678-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678-m02:/home/docker/cp-test.txt multinode-793678:/home/docker/cp-test_multinode-793678-m02_multinode-793678.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678 "sudo cat /home/docker/cp-test_multinode-793678-m02_multinode-793678.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678-m02:/home/docker/cp-test.txt multinode-793678-m03:/home/docker/cp-test_multinode-793678-m02_multinode-793678-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m03 "sudo cat /home/docker/cp-test_multinode-793678-m02_multinode-793678-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp testdata/cp-test.txt multinode-793678-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile422900635/001/cp-test_multinode-793678-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678-m03:/home/docker/cp-test.txt multinode-793678:/home/docker/cp-test_multinode-793678-m03_multinode-793678.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678 "sudo cat /home/docker/cp-test_multinode-793678-m03_multinode-793678.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 cp multinode-793678-m03:/home/docker/cp-test.txt multinode-793678-m02:/home/docker/cp-test_multinode-793678-m03_multinode-793678-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 ssh -n multinode-793678-m02 "sudo cat /home/docker/cp-test_multinode-793678-m03_multinode-793678-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.13s)

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-793678 node stop m03: (1.209420484s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-793678 status: exit status 7 (521.760627ms)

                                                
                                                
-- stdout --
	multinode-793678
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-793678-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-793678-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-793678 status --alsologtostderr: exit status 7 (507.346875ms)

                                                
                                                
-- stdout --
	multinode-793678
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-793678-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-793678-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0725 19:01:39.430346  559340 out.go:291] Setting OutFile to fd 1 ...
	I0725 19:01:39.430543  559340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:01:39.430556  559340 out.go:304] Setting ErrFile to fd 2...
	I0725 19:01:39.430562  559340 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:01:39.430844  559340 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 19:01:39.431099  559340 out.go:298] Setting JSON to false
	I0725 19:01:39.431163  559340 mustload.go:65] Loading cluster: multinode-793678
	I0725 19:01:39.431212  559340 notify.go:220] Checking for updates...
	I0725 19:01:39.431609  559340 config.go:182] Loaded profile config "multinode-793678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 19:01:39.431628  559340 status.go:255] checking status of multinode-793678 ...
	I0725 19:01:39.432478  559340 cli_runner.go:164] Run: docker container inspect multinode-793678 --format={{.State.Status}}
	I0725 19:01:39.451053  559340 status.go:330] multinode-793678 host status = "Running" (err=<nil>)
	I0725 19:01:39.451075  559340 host.go:66] Checking if "multinode-793678" exists ...
	I0725 19:01:39.451429  559340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-793678
	I0725 19:01:39.478858  559340 host.go:66] Checking if "multinode-793678" exists ...
	I0725 19:01:39.479191  559340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 19:01:39.479249  559340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-793678
	I0725 19:01:39.498375  559340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33304 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/multinode-793678/id_rsa Username:docker}
	I0725 19:01:39.592534  559340 ssh_runner.go:195] Run: systemctl --version
	I0725 19:01:39.596928  559340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 19:01:39.609403  559340 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 19:01:39.673649  559340 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-25 19:01:39.655346364 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 19:01:39.674269  559340 kubeconfig.go:125] found "multinode-793678" server: "https://192.168.58.2:8443"
	I0725 19:01:39.674306  559340 api_server.go:166] Checking apiserver status ...
	I0725 19:01:39.674350  559340 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0725 19:01:39.685773  559340 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1461/cgroup
	I0725 19:01:39.695454  559340 api_server.go:182] apiserver freezer: "6:freezer:/docker/7a0a1420b64e9149183f6c62221a504a8fd35e71cbb04ef1f92c625c45eeb9b6/kubepods/burstable/poda66529bbf40fbcdff36ef1be2771179d/1b15de11794b94d612cf4e094720ba75332ab32b7d818571269ce1c963f49717"
	I0725 19:01:39.695540  559340 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7a0a1420b64e9149183f6c62221a504a8fd35e71cbb04ef1f92c625c45eeb9b6/kubepods/burstable/poda66529bbf40fbcdff36ef1be2771179d/1b15de11794b94d612cf4e094720ba75332ab32b7d818571269ce1c963f49717/freezer.state
	I0725 19:01:39.704341  559340 api_server.go:204] freezer state: "THAWED"
	I0725 19:01:39.704380  559340 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0725 19:01:39.712164  559340 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0725 19:01:39.712191  559340 status.go:422] multinode-793678 apiserver status = Running (err=<nil>)
	I0725 19:01:39.712202  559340 status.go:257] multinode-793678 status: &{Name:multinode-793678 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 19:01:39.712220  559340 status.go:255] checking status of multinode-793678-m02 ...
	I0725 19:01:39.712524  559340 cli_runner.go:164] Run: docker container inspect multinode-793678-m02 --format={{.State.Status}}
	I0725 19:01:39.730172  559340 status.go:330] multinode-793678-m02 host status = "Running" (err=<nil>)
	I0725 19:01:39.730200  559340 host.go:66] Checking if "multinode-793678-m02" exists ...
	I0725 19:01:39.730492  559340 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-793678-m02
	I0725 19:01:39.746747  559340 host.go:66] Checking if "multinode-793678-m02" exists ...
	I0725 19:01:39.747162  559340 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0725 19:01:39.747209  559340 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-793678-m02
	I0725 19:01:39.763839  559340 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33309 SSHKeyPath:/home/jenkins/minikube-integration/19326-431487/.minikube/machines/multinode-793678-m02/id_rsa Username:docker}
	I0725 19:01:39.856721  559340 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0725 19:01:39.868178  559340 status.go:257] multinode-793678-m02 status: &{Name:multinode-793678-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0725 19:01:39.868210  559340 status.go:255] checking status of multinode-793678-m03 ...
	I0725 19:01:39.868504  559340 cli_runner.go:164] Run: docker container inspect multinode-793678-m03 --format={{.State.Status}}
	I0725 19:01:39.884944  559340 status.go:330] multinode-793678-m03 host status = "Stopped" (err=<nil>)
	I0725 19:01:39.884968  559340 status.go:343] host is not running, skipping remaining checks
	I0725 19:01:39.884976  559340 status.go:257] multinode-793678-m03 status: &{Name:multinode-793678-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)

TestMultiNode/serial/StartAfterStop (9.93s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-793678 node start m03 -v=7 --alsologtostderr: (9.179928858s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.93s)

TestMultiNode/serial/RestartKeepsNodes (87.1s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-793678
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-793678
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-793678: (25.065005156s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793678 --wait=true -v=8 --alsologtostderr
E0725 19:03:05.090399  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-793678 --wait=true -v=8 --alsologtostderr: (1m1.915758124s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-793678
--- PASS: TestMultiNode/serial/RestartKeepsNodes (87.10s)

TestMultiNode/serial/DeleteNode (5.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-793678 node delete m03: (4.813423711s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-793678 stop: (23.896256082s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-793678 status: exit status 7 (94.879095ms)
-- stdout --
	multinode-793678
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-793678-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-793678 status --alsologtostderr: exit status 7 (86.892481ms)
-- stdout --
	multinode-793678
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-793678-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0725 19:03:46.504197  567354 out.go:291] Setting OutFile to fd 1 ...
	I0725 19:03:46.504372  567354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:03:46.504402  567354 out.go:304] Setting ErrFile to fd 2...
	I0725 19:03:46.504422  567354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:03:46.504693  567354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 19:03:46.504904  567354 out.go:298] Setting JSON to false
	I0725 19:03:46.504980  567354 mustload.go:65] Loading cluster: multinode-793678
	I0725 19:03:46.505079  567354 notify.go:220] Checking for updates...
	I0725 19:03:46.505461  567354 config.go:182] Loaded profile config "multinode-793678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 19:03:46.505475  567354 status.go:255] checking status of multinode-793678 ...
	I0725 19:03:46.506027  567354 cli_runner.go:164] Run: docker container inspect multinode-793678 --format={{.State.Status}}
	I0725 19:03:46.524322  567354 status.go:330] multinode-793678 host status = "Stopped" (err=<nil>)
	I0725 19:03:46.524348  567354 status.go:343] host is not running, skipping remaining checks
	I0725 19:03:46.524357  567354 status.go:257] multinode-793678 status: &{Name:multinode-793678 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0725 19:03:46.524400  567354 status.go:255] checking status of multinode-793678-m02 ...
	I0725 19:03:46.524702  567354 cli_runner.go:164] Run: docker container inspect multinode-793678-m02 --format={{.State.Status}}
	I0725 19:03:46.541845  567354 status.go:330] multinode-793678-m02 host status = "Stopped" (err=<nil>)
	I0725 19:03:46.541870  567354 status.go:343] host is not running, skipping remaining checks
	I0725 19:03:46.541889  567354 status.go:257] multinode-793678-m02 status: &{Name:multinode-793678-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)

TestMultiNode/serial/RestartMultiNode (51.63s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793678 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0725 19:03:46.621137  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 19:04:28.137384  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-793678 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.948849099s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-793678 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.63s)

TestMultiNode/serial/ValidateNameConflict (33.84s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-793678
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793678-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-793678-m02 --driver=docker  --container-runtime=containerd: exit status 14 (77.426704ms)
-- stdout --
	* [multinode-793678-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-793678-m02' is duplicated with machine name 'multinode-793678-m02' in profile 'multinode-793678'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-793678-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-793678-m03 --driver=docker  --container-runtime=containerd: (31.24119782s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-793678
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-793678: exit status 80 (331.070057ms)
-- stdout --
	* Adding node m03 to cluster multinode-793678 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-793678-m03 already exists in multinode-793678-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-793678-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-793678-m03: (2.136366578s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.84s)
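
Note on the behavior exercised above: minikube appears to treat node machine names (profile name plus -m02, -m03 suffixes) and profile names as one namespace. A minimal sketch of both collisions, assuming an existing two-node profile; the "demo" names below are placeholders, not from this run:

    # Rejected with MK_USAGE (exit 14): "demo-m02" is already the machine name
    # of the second node inside profile "demo".
    $ minikube start -p demo-m02 --driver=docker --container-runtime=containerd

    # Rejected with GUEST_NODE_ADD (exit 80): the next auto-assigned node name
    # ("demo-m03") collides with a standalone profile of the same name.
    $ minikube start -p demo-m03 --driver=docker --container-runtime=containerd
    $ minikube node add -p demo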

TestPreload (114.41s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-446776 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-446776 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m16.477040442s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-446776 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-446776 image pull gcr.io/k8s-minikube/busybox: (1.225341906s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-446776
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-446776: (12.060276308s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-446776 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-446776 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.696904841s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-446776 image list
helpers_test.go:175: Cleaning up "test-preload-446776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-446776
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-446776: (2.616774773s)
--- PASS: TestPreload (114.41s)
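
The preload flow above can be replayed by hand; a minimal sketch with a placeholder profile name. The apparent intent is that an image pulled into the node's containerd store survives a stop and a restart onto the default (preload-enabled) Kubernetes version:

    $ minikube start -p preload-demo --memory=2200 --preload=false \
        --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    $ minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    $ minikube stop -p preload-demo
    $ minikube start -p preload-demo --memory=2200 --wait=true \
        --driver=docker --container-runtime=containerd
    # The busybox image should still appear here after the restart.
    $ minikube -p preload-demo image list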

TestScheduledStopUnix (107.38s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-643342 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-643342 --memory=2048 --driver=docker  --container-runtime=containerd: (30.318283989s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-643342 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-643342 -n scheduled-stop-643342
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-643342 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-643342 --cancel-scheduled
E0725 19:08:05.090435  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-643342 -n scheduled-stop-643342
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-643342
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-643342 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0725 19:08:46.619010  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-643342
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-643342: exit status 7 (63.764772ms)
-- stdout --
	scheduled-stop-643342
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-643342 -n scheduled-stop-643342
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-643342 -n scheduled-stop-643342: exit status 7 (67.05825ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-643342" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-643342
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-643342: (5.529817917s)
--- PASS: TestScheduledStopUnix (107.38s)
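
For reference, the scheduled-stop commands driven above, with a placeholder profile name; a later --schedule replaces a pending one, and --cancel-scheduled clears it:

    $ minikube stop -p sched-demo --schedule 5m       # arm a stop 5 minutes out
    $ minikube stop -p sched-demo --schedule 15s      # re-arm with a shorter delay
    $ minikube stop -p sched-demo --cancel-scheduled  # cancel; host stays Running
    # Once a schedule fires, status reports Stopped and exits 7.
    $ minikube status -p sched-demo --format={{.Host}}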

TestInsufficientStorage (11.4s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-301987 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-301987 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.946682811s)
-- stdout --
	{"specversion":"1.0","id":"7fcb2b0e-011a-44c1-8c2e-6024a5b1c925","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-301987] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d48dd468-7a21-4fe2-83cd-0a7f10031788","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19326"}}
	{"specversion":"1.0","id":"528e2e92-3242-4333-8e3e-9383b06eca0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f7a74bdb-cb46-4d66-b842-0bd9bfcafb71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig"}}
	{"specversion":"1.0","id":"ec29a97b-84b0-4c95-b048-28ab82c4d349","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube"}}
	{"specversion":"1.0","id":"5077bc99-654e-4d93-8842-7df8b4617109","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9b41e59e-54be-43ee-8907-9a8be7cc93f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e4af1269-d3d4-4f66-ba3b-7d1b9e8ac920","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"d952b83a-a66d-459e-b549-430e39a0dd1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f56fc3ec-a490-460a-8847-ff910b5c8203","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c6d4da4e-41f0-4025-89b7-51503d4ad835","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"1fae792a-6eaf-43b8-a0f3-c254816ccee0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-301987\" primary control-plane node in \"insufficient-storage-301987\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"665aa2b6-c24c-4501-b057-61014d9368ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"85e1d058-8280-4ec1-a6b7-265f4a9ec2c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"536c3c96-32a3-440a-99cf-65311c385c93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-301987 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-301987 --output=json --layout=cluster: exit status 7 (280.961488ms)
-- stdout --
	{"Name":"insufficient-storage-301987","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-301987","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0725 19:09:07.091786  585870 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-301987" does not appear in /home/jenkins/minikube-integration/19326-431487/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-301987 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-301987 --output=json --layout=cluster: exit status 7 (285.647753ms)
-- stdout --
	{"Name":"insufficient-storage-301987","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-301987","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0725 19:09:07.375785  585931 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-301987" does not appear in /home/jenkins/minikube-integration/19326-431487/kubeconfig
	E0725 19:09:07.385788  585931 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/insufficient-storage-301987/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-301987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-301987
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-301987: (1.885466645s)
--- PASS: TestInsufficientStorage (11.40s)
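
Judging from the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values echoed in the JSON output, the test appears to simulate a nearly full /var rather than actually filling the disk; a sketch under that assumption, with a placeholder profile name:

    # start aborts with RSRC_DOCKER_STORAGE and exit code 26.
    $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --memory=2048 --output=json --wait=true \
      --driver=docker --container-runtime=containerd
    # status then reports StatusCode 507 (InsufficientStorage) and exits 7.
    $ minikube status -p storage-demo --output=json --layout=cluster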

TestRunningBinaryUpgrade (88.03s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2424796914 start -p running-upgrade-143041 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2424796914 start -p running-upgrade-143041 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (49.728206512s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-143041 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0725 19:18:46.619466  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-143041 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.033823248s)
helpers_test.go:175: Cleaning up "running-upgrade-143041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-143041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-143041: (3.293153934s)
--- PASS: TestRunningBinaryUpgrade (88.03s)

TestKubernetesUpgrade (350.95s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-164987 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0725 19:13:05.090081  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-164987 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.528930607s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-164987
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-164987: (1.231032826s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-164987 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-164987 status --format={{.Host}}: exit status 7 (70.662678ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-164987 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0725 19:13:46.618975  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-164987 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m40.151630037s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-164987 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-164987 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-164987 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (80.292292ms)
-- stdout --
	* [kubernetes-upgrade-164987] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-164987
	    minikube start -p kubernetes-upgrade-164987 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1649872 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-164987 --kubernetes-version=v1.31.0-beta.0
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-164987 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-164987 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.544422613s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-164987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-164987
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-164987: (2.25421916s)
--- PASS: TestKubernetesUpgrade (350.95s)
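
The upgrade/downgrade contract shown above, as a minimal sketch with a placeholder profile name: upgrading in place is a stop plus a start at a newer --kubernetes-version, while an in-place downgrade is refused with K8S_DOWNGRADE_UNSUPPORTED (exit 106) and the CLI suggests delete-and-recreate instead:

    $ minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=containerd
    $ minikube stop -p upgrade-demo
    $ minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.31.0-beta.0 \
        --driver=docker --container-runtime=containerd
    # Refused: an existing v1.31.0-beta.0 cluster cannot be downgraded in place.
    $ minikube start -p upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 \
        --driver=docker --container-runtime=containerd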

TestMissingContainerUpgrade (146.01s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2616673412 start -p missing-upgrade-610607 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2616673412 start -p missing-upgrade-610607 --memory=2200 --driver=docker  --container-runtime=containerd: (1m17.658344795s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-610607
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-610607
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-610607 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0725 19:16:49.663985  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-610607 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.592501201s)
helpers_test.go:175: Cleaning up "missing-upgrade-610607" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-610607
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-610607: (2.359646126s)
--- PASS: TestMissingContainerUpgrade (146.01s)

TestPause/serial/Start (72.75s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-307494 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-307494 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m12.751426834s)
--- PASS: TestPause/serial/Start (72.75s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526990 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-526990 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (86.254399ms)
-- stdout --
	* [NoKubernetes-526990] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
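
As the error text states, --no-kubernetes and --kubernetes-version are mutually exclusive; a minimal sketch with a placeholder profile name:

    # Rejected with MK_USAGE (exit 14).
    $ minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 \
        --driver=docker --container-runtime=containerd
    # If a version is pinned in the global config, clear it first, then retry.
    $ minikube config unset kubernetes-version
    $ minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=containerd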

TestNoKubernetes/serial/StartWithK8s (43.7s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526990 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-526990 --driver=docker  --container-runtime=containerd: (43.314956832s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-526990 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.70s)

TestNoKubernetes/serial/StartWithStopK8s (16.67s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526990 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-526990 --no-kubernetes --driver=docker  --container-runtime=containerd: (14.345706625s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-526990 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-526990 status -o json: exit status 2 (307.76418ms)
-- stdout --
	{"Name":"NoKubernetes-526990","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-526990
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-526990: (2.018383267s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.67s)

TestNoKubernetes/serial/Start (5.94s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526990 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-526990 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.943996244s)
--- PASS: TestNoKubernetes/serial/Start (5.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-526990 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-526990 "sudo systemctl is-active --quiet service kubelet": exit status 1 (307.478121ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
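
The "exit status 3" above is expected: systemctl is-active returns 3 for an inactive unit, minikube ssh propagates it, and so a non-zero exit is the passing outcome here. A sketch with a placeholder profile name:

    $ minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet" \
        || echo "kubelet not running, as intended with --no-kubernetes"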

TestNoKubernetes/serial/ProfileList (1.01s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.01s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-526990
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-526990: (1.268694616s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.27s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-526990 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-526990 --driver=docker  --container-runtime=containerd: (7.26899664s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.27s)

TestPause/serial/SecondStartNoReconfiguration (8.66s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-307494 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-307494 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.64640937s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.66s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-526990 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-526990 "sudo systemctl is-active --quiet service kubelet": exit status 1 (307.420296ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestPause/serial/Pause (0.91s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-307494 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.91s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-307494 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-307494 --output=json --layout=cluster: exit status 2 (399.215672ms)
-- stdout --
	{"Name":"pause-307494","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-307494","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)
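
The cluster-layout status JSON above borrows HTTP-style codes: 200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage; a paused cluster also yields overall exit status 2. To inspect a paused profile (placeholder name):

    $ minikube status -p pause-demo --output=json --layout=cluster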

TestNetworkPlugins/group/false (5.09s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-212266 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-212266 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (256.227377ms)
-- stdout --
	* [false-212266] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19326
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0725 19:10:32.054582  596949 out.go:291] Setting OutFile to fd 1 ...
	I0725 19:10:32.054836  596949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:10:32.054868  596949 out.go:304] Setting ErrFile to fd 2...
	I0725 19:10:32.054886  596949 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0725 19:10:32.055216  596949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19326-431487/.minikube/bin
	I0725 19:10:32.055727  596949 out.go:298] Setting JSON to false
	I0725 19:10:32.056829  596949 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":10381,"bootTime":1721924251,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0725 19:10:32.056944  596949 start.go:139] virtualization:  
	I0725 19:10:32.059737  596949 out.go:177] * [false-212266] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0725 19:10:32.062250  596949 out.go:177]   - MINIKUBE_LOCATION=19326
	I0725 19:10:32.062330  596949 notify.go:220] Checking for updates...
	I0725 19:10:32.066416  596949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0725 19:10:32.068758  596949 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19326-431487/kubeconfig
	I0725 19:10:32.070849  596949 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19326-431487/.minikube
	I0725 19:10:32.072836  596949 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0725 19:10:32.076584  596949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0725 19:10:32.078765  596949 config.go:182] Loaded profile config "pause-307494": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0725 19:10:32.078870  596949 driver.go:392] Setting default libvirt URI to qemu:///system
	I0725 19:10:32.116555  596949 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0725 19:10:32.116668  596949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0725 19:10:32.221986  596949 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-25 19:10:32.212473141 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0725 19:10:32.222096  596949 docker.go:307] overlay module found
	I0725 19:10:32.224438  596949 out.go:177] * Using the docker driver based on user configuration
	I0725 19:10:32.226103  596949 start.go:297] selected driver: docker
	I0725 19:10:32.226128  596949 start.go:901] validating driver "docker" against <nil>
	I0725 19:10:32.226144  596949 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0725 19:10:32.228684  596949 out.go:177] 
	W0725 19:10:32.230585  596949 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0725 19:10:32.232441  596949 out.go:177] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-212266 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-212266

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-212266

>>> host: /etc/nsswitch.conf:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

>>> host: /etc/hosts:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

>>> host: /etc/resolv.conf:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-212266

>>> host: crictl pods:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

>>> host: crictl containers:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

>>> k8s: describe netcat deployment:
error: context "false-212266" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-212266" does not exist

>>> k8s: netcat logs:
error: context "false-212266" does not exist

>>> k8s: describe coredns deployment:
error: context "false-212266" does not exist

>>> k8s: describe coredns pods:
error: context "false-212266" does not exist

>>> k8s: coredns logs:
error: context "false-212266" does not exist
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-212266" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-212266" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Jul 2024 19:10:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-307494
contexts:
- context:
    cluster: pause-307494
    extensions:
    - extension:
        last-update: Thu, 25 Jul 2024 19:10:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-307494
  name: pause-307494
current-context: pause-307494
kind: Config
preferences: {}
users:
- name: pause-307494
  user:
    client-certificate: /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/pause-307494/client.crt
    client-key: /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/pause-307494/client.key
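Editor's note: the kubeconfig dumped above only contains the pause-307494 cluster, which is why every debugLogs probe against the already-deleted false-212266 context fails with "context was not found". For illustration only, a small client-go sketch that loads the same default kubeconfig and pins a context by name; the context name and server address come from the dump, everything else is an assumption and not part of the test suite:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load kubeconfig from the default locations (KUBECONFIG or ~/.kube/config)
		// and pin the context shown in the dump above.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "pause-307494"}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			fmt.Println("context not usable:", err) // e.g. a deleted profile's context
			return
		}
		fmt.Println("API server:", cfg.Host) // https://192.168.76.2:8443 in the dump above
	}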

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-212266

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-212266"

                                                
                                                
----------------------- debugLogs end: false-212266 [took: 4.625438987s] --------------------------------
helpers_test.go:175: Cleaning up "false-212266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-212266
--- PASS: TestNetworkPlugins/group/false (5.09s)
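
Editor's note: every debugLogs probe above is an ordinary kubectl or minikube invocation aimed at the false-212266 profile, which has already been torn down, so each returns the same "context was not found" / "Profile ... not found" message almost immediately. A minimal sketch of one such probe, assuming kubectl is on PATH (illustration only, not the harness code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirror the "netcat: nslookup kubernetes.default" probe against a
		// context that no longer exists.
		cmd := exec.Command("kubectl", "--context", "false-212266",
			"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // expected: context was not found for specified context: false-212266
		if err != nil {
			fmt.Println("probe failed as expected:", err)
		}
	}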

                                                
                                    
x
+
TestPause/serial/Unpause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-307494 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.83s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.15s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-307494 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-307494 --alsologtostderr -v=5: (1.151466132s)
--- PASS: TestPause/serial/PauseAgain (1.15s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.00s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-307494 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-307494 --alsologtostderr -v=5: (3.00063531s)
--- PASS: TestPause/serial/DeletePaused (3.00s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.16s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-307494
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-307494: exit status 1 (31.227672ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-307494: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)
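
Editor's note: VerifyDeletedResources treats the non-zero exit of `docker volume inspect pause-307494` as proof that the profile's volume was actually removed by the preceding delete. A minimal sketch of that check, assuming a local docker CLI (not the actual pause_test.go helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// After `minikube delete -p pause-307494`, the profile's volume must not
		// exist, so `docker volume inspect` is expected to fail.
		err := exec.Command("docker", "volume", "inspect", "pause-307494").Run()
		if err != nil {
			fmt.Println("volume is gone, as expected:", err)
			return
		}
		fmt.Println("volume still exists: cleanup failed")
	}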

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.07s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (118.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1715774585 start -p stopped-upgrade-663374 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0725 19:18:05.090359  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1715774585 start -p stopped-upgrade-663374 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m0.041604215s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1715774585 -p stopped-upgrade-663374 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1715774585 -p stopped-upgrade-663374 stop: (1.391723189s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-663374 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-663374 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (56.793674252s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.23s)
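
Editor's note: the Upgrade subtest drives three commands in sequence: start the cluster with an older released minikube binary, stop it, then start it again with the binary under test. A stripped-down sketch of that sequence with os/exec; the binary paths are the ones logged above, the flags are abbreviated and the error handling simplified:

	package main

	import (
		"log"
		"os/exec"
	)

	// run executes one command and aborts on failure, echoing the upgrade flow above.
	func run(name string, args ...string) {
		cmd := exec.Command(name, args...)
		cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
		if err := cmd.Run(); err != nil {
			log.Fatalf("%s %v: %v", name, args, err)
		}
	}

	func main() {
		old := "/tmp/minikube-v1.26.0.1715774585" // released binary used for the first start
		cur := "out/minikube-linux-arm64"         // binary under test
		run(old, "start", "-p", "stopped-upgrade-663374", "--memory=2200",
			"--vm-driver=docker", "--container-runtime=containerd")
		run(old, "-p", "stopped-upgrade-663374", "stop")
		run(cur, "start", "-p", "stopped-upgrade-663374", "--memory=2200",
			"--driver=docker", "--container-runtime=containerd")
	}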

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (82.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m22.737727583s)
--- PASS: TestNetworkPlugins/group/auto/Start (82.74s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-663374
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-663374: (1.688735501s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (75.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m15.036436062s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-212266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-212266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tzh87" [76d93d1b-bd1c-4d97-9846-ac415e74b393] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tzh87" [76d93d1b-bd1c-4d97-9846-ac415e74b393] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004065397s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.33s)
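
Editor's note: NetCatPod replaces the netcat deployment from testdata/netcat-deployment.yaml and then waits for the app=netcat pod to report Running (the Pending line above is the pod before its image has been pulled). A rough equivalent of that wait via kubectl, assuming the deployment lands in the default namespace (illustration, not the helpers_test.go poller):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Wait up to 15 minutes for the netcat pods to become Ready, roughly what
		// the test's label-based poll accomplishes.
		cmd := exec.Command("kubectl", "--context", "auto-212266",
			"wait", "--for=condition=Ready", "pod",
			"-l", "app=netcat", "-n", "default", "--timeout=15m")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("netcat pod never became Ready:", err)
		}
	}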

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-212266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
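
Editor's note: the three short checks above exercise different paths from inside the netcat pod: DNS resolves kubernetes.default through the cluster DNS, Localhost connects to the pod's own listener on 127.0.0.1:8080, and HairPin connects back to the pod through its own `netcat` service name. A compact sketch of the trio as kubectl exec calls (the same commands as logged, wrapped for reuse):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// inNetcat runs a command inside the netcat deployment of the given context.
	func inNetcat(ctx string, args ...string) error {
		full := append([]string{"--context", ctx, "exec", "deployment/netcat", "--"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}

	func main() {
		ctx := "auto-212266"
		checks := [][]string{
			{"nslookup", "kubernetes.default"},                  // DNS
			{"/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}, // Localhost
			{"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},    // HairPin
		}
		for _, c := range checks {
			if err := inNetcat(ctx, c...); err != nil {
				fmt.Println("check failed:", c, err)
			}
		}
	}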

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (80.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0725 19:21:08.138095  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m20.572613385s)
--- PASS: TestNetworkPlugins/group/calico/Start (80.57s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-29rmr" [60ea1f35-0931-4b35-bb64-f8b525cba3e5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004281521s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
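
Editor's note: ControllerPod only runs for CNIs that ship an agent pod; for kindnet it waits up to 10 minutes for a pod labelled app=kindnet in kube-system to be Running (calico and flannel have analogous checks below with their own labels). A rough client-go equivalent of that poll, assuming the kindnet-212266 context in the default kubeconfig (not the test's own helper):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a client for the kindnet-212266 context and poll for a Running
		// app=kindnet pod, roughly what the 10m label wait above verifies.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "kindnet-212266"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(10 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "app=kindnet"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("kindnet pod running:", p.Name)
						return
					}
				}
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for app=kindnet")
	}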

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-212266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-212266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-58hzz" [25504ef0-2373-456a-8b40-7d9eea3a4a21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-58hzz" [25504ef0-2373-456a-8b40-7d9eea3a4a21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004371041s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-212266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (65.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m5.284390952s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-tqx74" [880c3fd5-fcc6-42e9-9882-11e86ba20a30] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006194409s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-212266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-212266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-ddrx2" [0a6c40ea-919c-4dcf-bdeb-70d43aac293a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-ddrx2" [0a6c40ea-919c-4dcf-bdeb-70d43aac293a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.003668748s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-212266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (82.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m22.479707834s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-212266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-212266 replace --force -f testdata/netcat-deployment.yaml
E0725 19:23:05.090015  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-tvb8g" [f8369e90-2232-466a-ad2b-a3dd95b346e7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-tvb8g" [f8369e90-2232-466a-ad2b-a3dd95b346e7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003635312s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-212266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (61.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0725 19:23:46.618765  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m1.241351499s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-212266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-212266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rv6pf" [8ef0bd5c-8bf1-4ef9-a0a6-6ab3311c8150] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-rv6pf" [8ef0bd5c-8bf1-4ef9-a0a6-6ab3311c8150] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.008914716s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-212266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-ph5xr" [13535eb6-4854-4c45-9fdb-7f56f31c222a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006114052s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-212266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-212266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k2j7x" [20c591df-fd35-4cc9-a2a6-77feca562d53] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-k2j7x" [20c591df-fd35-4cc9-a2a6-77feca562d53] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003525054s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (88.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-212266 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m28.629357117s)
--- PASS: TestNetworkPlugins/group/bridge/Start (88.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-212266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (147.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-262689 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0725 19:25:30.609829  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
E0725 19:25:35.730896  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
E0725 19:25:45.971638  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
E0725 19:26:06.452232  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
E0725 19:26:17.807890  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:17.813224  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:17.823392  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:17.843701  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:17.883962  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:17.964209  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:18.124381  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:18.444549  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:19.085258  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:20.365693  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:26:22.925916  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-262689 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m27.351685672s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-212266 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-212266 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-c7s26" [086e19e0-c448-45e3-955d-c1564e443f4b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0725 19:26:28.046577  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-c7s26" [086e19e0-c448-45e3-955d-c1564e443f4b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005389072s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-212266 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-212266 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (71.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-143817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0725 19:26:58.768536  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:27:19.541333  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:19.546587  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:19.556899  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:19.577207  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:19.617451  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:19.697729  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:19.858115  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:20.179027  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:20.819252  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:22.100122  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:24.660645  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:29.781301  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:27:39.729280  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:27:40.026347  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-143817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (1m11.840857054s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.84s)

TestStartStop/group/old-k8s-version/serial/DeployApp (7.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-262689 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6fff3f6d-dbf8-4749-98cb-e4557d32d26d] Pending
helpers_test.go:344: "busybox" [6fff3f6d-dbf8-4749-98cb-e4557d32d26d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6fff3f6d-dbf8-4749-98cb-e4557d32d26d] Running
E0725 19:28:00.508154  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003957182s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-262689 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.54s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-262689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-262689 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.097977512s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-262689 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-262689 --alsologtostderr -v=3
E0725 19:28:05.089976  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 19:28:05.400816  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:05.406167  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:05.416471  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:05.436715  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:05.476961  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:05.557425  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:05.717818  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:06.038400  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:06.678595  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:07.959273  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-262689 --alsologtostderr -v=3: (12.144678218s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.14s)

TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143817 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bbab7220-a13d-4e42-ae73-32b86c1d7cab] Pending
E0725 19:28:09.334500  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
helpers_test.go:344: "busybox" [bbab7220-a13d-4e42-ae73-32b86c1d7cab] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0725 19:28:10.520416  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
helpers_test.go:344: "busybox" [bbab7220-a13d-4e42-ae73-32b86c1d7cab] Running
E0725 19:28:15.640973  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005005654s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-143817 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-262689 -n old-k8s-version-262689
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-262689 -n old-k8s-version-262689: exit status 7 (80.577802ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-262689 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-143817 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-143817 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.085435921s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-143817 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (12.18s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-143817 --alsologtostderr -v=3
E0725 19:28:25.881522  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-143817 --alsologtostderr -v=3: (12.182810458s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.18s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-143817 -n no-preload-143817
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-143817 -n no-preload-143817: exit status 7 (98.039723ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-143817 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (272.01s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-143817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0725 19:28:41.469178  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:28:46.362059  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:28:46.619175  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 19:29:01.649541  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:29:24.884979  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:24.890482  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:24.900807  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:24.921071  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:24.961278  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:25.041563  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:25.202128  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:25.523000  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:26.163205  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:27.323163  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:29:27.443387  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:30.006752  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:35.127481  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:43.739522  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:43.744950  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:43.755272  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:43.775524  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:43.815801  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:43.896133  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:44.056661  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:44.377240  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:45.022504  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:45.367899  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:29:46.303376  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:48.864248  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:29:53.985200  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:30:03.390255  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:30:04.226398  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:30:05.849072  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:30:24.707005  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:30:25.486316  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
E0725 19:30:46.810033  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:30:49.243981  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:30:53.175319  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
E0725 19:31:05.667612  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:31:17.807527  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:31:25.875737  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:25.881002  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:25.891312  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:25.911516  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:25.951871  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:26.032244  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:26.192574  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:26.513724  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:27.154538  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:28.434751  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:30.995371  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:36.116101  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:31:45.490170  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
E0725 19:31:46.357154  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:32:06.837986  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:32:08.731070  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
E0725 19:32:19.540855  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:32:27.588299  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:32:47.231339  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:32:47.798799  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-143817 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (4m31.644917615s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-143817 -n no-preload-143817
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (272.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-qwxqr" [da0f712a-7bdc-41c7-824c-b7b00c52fa10] Running
E0725 19:33:05.090350  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 19:33:05.399970  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004523197s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-qwxqr" [da0f712a-7bdc-41c7-824c-b7b00c52fa10] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004915391s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-143817 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-143817 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.24s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-143817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-143817 -n no-preload-143817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-143817 -n no-preload-143817: exit status 2 (345.834884ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-143817 -n no-preload-143817
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-143817 -n no-preload-143817: exit status 2 (338.756724ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-143817 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-143817 -n no-preload-143817
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-143817 -n no-preload-143817
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.24s)

TestStartStop/group/embed-certs/serial/FirstStart (69.84s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-240166 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0725 19:33:29.664433  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 19:33:33.084207  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:33:46.619415  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 19:34:09.719352  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:34:24.884553  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-240166 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m9.841994063s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (69.84s)

TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-240166 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f9160f98-5691-49ba-a16c-87dfa019af79] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f9160f98-5691-49ba-a16c-87dfa019af79] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003936819s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-240166 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.46s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nkqkw" [92aac991-5388-4be6-ab28-fa9c30e3c7e1] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004373496s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-240166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-240166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.0918864s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-240166 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nkqkw" [92aac991-5388-4be6-ab28-fa9c30e3c7e1] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005321539s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-262689 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/Stop (12.33s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-240166 --alsologtostderr -v=3
E0725 19:34:43.738866  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-240166 --alsologtostderr -v=3: (12.333007492s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.33s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-262689 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-262689 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-262689 -n old-k8s-version-262689
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-262689 -n old-k8s-version-262689: exit status 2 (324.006119ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-262689 -n old-k8s-version-262689
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-262689 -n old-k8s-version-262689: exit status 2 (313.446324ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-262689 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-262689 -n old-k8s-version-262689
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-262689 -n old-k8s-version-262689
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.84s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-240166 -n embed-certs-240166
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-240166 -n embed-certs-240166: exit status 7 (100.928662ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-240166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-263074 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-263074 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m7.601325625s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.60s)

TestStartStop/group/embed-certs/serial/SecondStart (270.56s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-240166 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0725 19:35:11.428527  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
E0725 19:35:25.486624  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-240166 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m30.199279808s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-240166 -n embed-certs-240166
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.56s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-263074 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c59d438b-3c42-402d-8d34-59e76ec9b679] Pending
helpers_test.go:344: "busybox" [c59d438b-3c42-402d-8d34-59e76ec9b679] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c59d438b-3c42-402d-8d34-59e76ec9b679] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004432754s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-263074 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-263074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-263074 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051918261s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-263074 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-263074 --alsologtostderr -v=3
E0725 19:36:17.807357  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/kindnet-212266/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-263074 --alsologtostderr -v=3: (12.120034585s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074: exit status 7 (76.046039ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-263074 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.75s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-263074 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0725 19:36:25.875721  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:36:53.559632  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/bridge-212266/client.crt: no such file or directory
E0725 19:37:19.541186  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/calico-212266/client.crt: no such file or directory
E0725 19:37:48.138344  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 19:37:55.913367  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:55.918603  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:55.928917  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:55.949228  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:55.989491  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:56.070016  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:56.230405  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:56.550736  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:57.191001  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:37:58.471257  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:38:01.031689  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:38:05.089999  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/functional-992537/client.crt: no such file or directory
E0725 19:38:05.400592  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/custom-flannel-212266/client.crt: no such file or directory
E0725 19:38:06.152894  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:38:08.740178  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:08.745415  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:08.755759  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:08.776065  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:08.816344  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:08.896726  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:09.057208  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:09.377720  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:10.018002  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:11.299008  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:13.859454  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:16.393140  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:38:18.980503  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:29.221677  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:38:36.873739  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
E0725 19:38:46.618636  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/addons-673848/client.crt: no such file or directory
E0725 19:38:49.702473  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
E0725 19:39:17.834695  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-263074 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m28.41127178s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.75s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5x6r2" [4f05eeb2-6dd8-4ce0-936d-11f22ca6ab5a] Running
E0725 19:39:24.884937  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/enable-default-cni-212266/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004068734s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
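The wait performed by this test can be approximated by hand with kubectl. A minimal sketch, using the context name from this run; the --timeout value is illustrative and the test itself polls for Running rather than using kubectl wait:

	# Wait for the dashboard pod the test looks for to become Ready
	kubectl --context embed-certs-240166 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m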

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-5x6r2" [4f05eeb2-6dd8-4ce0-936d-11f22ca6ab5a] Running
E0725 19:39:30.662701  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00413709s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-240166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-240166 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-240166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-240166 -n embed-certs-240166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-240166 -n embed-certs-240166: exit status 2 (329.322178ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-240166 -n embed-certs-240166
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-240166 -n embed-certs-240166: exit status 2 (327.501116ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-240166 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-240166 -n embed-certs-240166
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-240166 -n embed-certs-240166
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (37.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-638354 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0725 19:39:43.739402  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/flannel-212266/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-638354 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (37.643072814s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.64s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-638354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-638354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.276336816s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)
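The WARNING above reflects that a cluster started with --network-plugin=cni schedules no pods until some CNI is deployed. A minimal sketch of one way to satisfy that requirement using minikube's built-in --cni selector; the profile name "example-cni" and the manifest path are hypothetical and not taken from this run:

	# Let minikube deploy a CNI (flannel here) at start time
	out/minikube-linux-arm64 start -p example-cni --network-plugin=cni --cni=flannel --driver=docker --container-runtime=containerd
	# Or apply a CNI manifest of your choice after the cluster is up
	# (placeholder path, not from this report)
	kubectl --context example-cni apply -f /path/to/cni-manifest.yaml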

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-638354 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-638354 --alsologtostderr -v=3: (1.295641323s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-638354 -n newest-cni-638354
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-638354 -n newest-cni-638354: exit status 7 (74.250162ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-638354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
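The "(may be ok)" notes in this group come from minikube status reporting cluster state through its exit code rather than failing outright: in this run a stopped host returned exit status 7 and, later in the Pause tests, a paused cluster returned exit status 2. A small sketch of reproducing the check by hand, assuming the newest-cni-638354 profile still exists on the host:

	# Host field of a stopped profile; a non-zero exit is expected here
	out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-638354 -n newest-cni-638354
	echo "status exit code: $?"   # 7 accompanied "Stopped" in this run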

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (17.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-638354 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0725 19:40:25.486576  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/auto-212266/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-638354 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (17.34026985s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-638354 -n newest-cni-638354
E0725 19:40:39.755482  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/old-k8s-version-262689/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-638354 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-638354 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-638354 -n newest-cni-638354
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-638354 -n newest-cni-638354: exit status 2 (336.327262ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-638354 -n newest-cni-638354
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-638354 -n newest-cni-638354: exit status 2 (405.07292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-638354 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-638354 -n newest-cni-638354
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-638354 -n newest-cni-638354
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.71s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hxqhc" [45086342-ef13-45a8-a35e-784163debb19] Running
E0725 19:40:52.583579  436893 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/no-preload-143817/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003504135s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-hxqhc" [45086342-ef13-45a8-a35e-784163debb19] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003423925s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-263074 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-263074 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-263074 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074: exit status 2 (317.0739ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074: exit status 2 (338.51527ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-263074 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-263074 -n default-k8s-diff-port-263074
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.03s)

                                                
                                    

Test skip (31/336)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.98s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-414484 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-414484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-414484
--- SKIP: TestDownloadOnlyKic (0.98s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-212266 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-212266" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19326-431487/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 25 Jul 2024 19:10:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-307494
contexts:
- context:
    cluster: pause-307494
    extensions:
    - extension:
        last-update: Thu, 25 Jul 2024 19:10:28 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-307494
  name: pause-307494
current-context: pause-307494
kind: Config
preferences: {}
users:
- name: pause-307494
  user:
    client-certificate: /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/pause-307494/client.crt
    client-key: /home/jenkins/minikube-integration/19326-431487/.minikube/profiles/pause-307494/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-212266

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-212266"

                                                
                                                
----------------------- debugLogs end: kubenet-212266 [took: 4.068382978s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-212266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-212266
--- SKIP: TestNetworkPlugins/group/kubenet (4.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-212266 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-212266" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-212266

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-212266" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212266"

                                                
                                                
----------------------- debugLogs end: cilium-212266 [took: 5.134054726s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-212266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-212266
--- SKIP: TestNetworkPlugins/group/cilium (5.38s)
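Note on the SKIP above: it is produced by a guard inside the test itself, recorded in the output at net_test.go:102. A minimal sketch of that pattern, assuming only the standard Go testing package; the test function name below is a hypothetical stand-in, not the real minikube test:

package net_test

import "testing"

// Hypothetical stand-in for TestNetworkPlugins/group/cilium; the skip
// message matches the one recorded in the log above.
func TestNetworkPluginsCilium(t *testing.T) {
	t.Skip("Skipping the test as it's interfering with other tests and is outdated")

	// The actual cilium connectivity checks would run below this guard,
	// which is why the debugLogs above find no cluster or context.
}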

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-404843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-404843
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)
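The SKIP above is driver-gated (start_stop_delete_test.go:103): the subtest is only meaningful with the virtualbox driver. A sketch of such a guard; the driver() helper reading an environment variable is an assumption made only for this sketch, not how the real test plumbs the driver:

package startstop_test

import (
	"os"
	"testing"
)

// driver reports the VM driver under test; reading TEST_DRIVER from the
// environment is an assumption for this sketch only.
func driver() string { return os.Getenv("TEST_DRIVER") }

func TestDisableDriverMounts(t *testing.T) {
	// Mirror the guard recorded at start_stop_delete_test.go:103.
	if driver() != "virtualbox" {
		t.Skip("skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox")
	}
	// Driver-mount assertions would follow once the guard passes.
}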

                                                
                                    