Test Report: Docker_Linux_containerd_arm64 19522

                    
d15490255971b1813e1f056874620592048fd695:2024-08-28:35972

Failed tests (2/328)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 200.13       |
| 302   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 374.1        |
|-------|---------------------------------------------------------|--------------|
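Both failures come from the minikube integration suite under test/integration. As a rough sketch of re-running just the Volcano subtest locally (assumptions: repo root as working directory, a built minikube binary at the path the log itself invokes, out/minikube-linux-arm64, and the docker driver available; the suite's usual integration target normally supplies additional flags):

    go test -v -timeout 60m ./test/integration -run 'TestAddons/serial/Volcano'

The same pattern with -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' targets the second failure, though SecondStart depends on the earlier serial steps of that group having run.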
TestAddons/serial/Volcano (200.13s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 49.245901ms
addons_test.go:897: volcano-scheduler stabilized in 49.378903ms
addons_test.go:913: volcano-controller stabilized in 49.438766ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-dnf49" [91988a25-8ef5-41d8-b857-a71aba4c4dc8] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003453981s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-wjb2c" [f0870e9e-520b-432d-9c7b-f7e1cd9b1cc6] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004019451s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-qq9rb" [01ca3422-e88e-4229-8811-7962aa3c8b77] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003692025s
addons_test.go:932: (dbg) Run:  kubectl --context addons-726754 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-726754 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-726754 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c3bbd2eb-385a-41b4-b65c-dd12312d7cda] Pending
helpers_test.go:344: "test-job-nginx-0" [c3bbd2eb-385a-41b4-b65c-dd12312d7cda] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-726754 -n addons-726754
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-27 23:08:49.780820951 +0000 UTC m=+438.183524608
addons_test.go:964: (dbg) Run:  kubectl --context addons-726754 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-726754 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-a2a04201-5354-4354-80cc-85b050c4b25f
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vnt72 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-vnt72:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-726754 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-726754 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
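The failure above is a scheduling failure rather than a crash: the test-job-nginx-0 pod requests 1 CPU, and the single-node cluster (created with --cpus=2, as shown by the docker run command later in this log) reports "0/1 nodes are unavailable: 1 Insufficient cpu." A minimal sketch of confirming the CPU pressure against the same profile, using only standard kubectl (context name taken from the log):

    kubectl --context addons-726754 describe nodes | grep -A 10 'Allocated resources'
    kubectl --context addons-726754 get pods -A -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'

The first command shows how much CPU is already requested on the node; the second lists per-pod requests, so the 1-CPU request from test-job-nginx-0 can be compared with what the enabled addons leave free.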
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-726754
helpers_test.go:235: (dbg) docker inspect addons-726754:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "319dc7f1aee59bc84eb2c16051b30669bb72533aded72416f1cb1e689f7550fd",
	        "Created": "2024-08-27T23:02:20.564361542Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1740984,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-27T23:02:20.694563375Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0985147309945253cbe7e881ef8b47b2eeae8c92bbeecfbcb5398ea2f50c97c6",
	        "ResolvConfPath": "/var/lib/docker/containers/319dc7f1aee59bc84eb2c16051b30669bb72533aded72416f1cb1e689f7550fd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/319dc7f1aee59bc84eb2c16051b30669bb72533aded72416f1cb1e689f7550fd/hostname",
	        "HostsPath": "/var/lib/docker/containers/319dc7f1aee59bc84eb2c16051b30669bb72533aded72416f1cb1e689f7550fd/hosts",
	        "LogPath": "/var/lib/docker/containers/319dc7f1aee59bc84eb2c16051b30669bb72533aded72416f1cb1e689f7550fd/319dc7f1aee59bc84eb2c16051b30669bb72533aded72416f1cb1e689f7550fd-json.log",
	        "Name": "/addons-726754",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-726754:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-726754",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0d3e668c08e4b0e37ea3e03496a12864147f74c09fde612aef9c0598b023ec6f-init/diff:/var/lib/docker/overlay2/dff060cd4e9382e758ba60bffaaeeca22b78e3466a4ecd4887c9950dd9c3672c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0d3e668c08e4b0e37ea3e03496a12864147f74c09fde612aef9c0598b023ec6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0d3e668c08e4b0e37ea3e03496a12864147f74c09fde612aef9c0598b023ec6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0d3e668c08e4b0e37ea3e03496a12864147f74c09fde612aef9c0598b023ec6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-726754",
	                "Source": "/var/lib/docker/volumes/addons-726754/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-726754",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-726754",
	                "name.minikube.sigs.k8s.io": "addons-726754",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "63efac2b58e97cbdfcdfc0750dae022b5c2e8ce80d22b4c2a24d72c5bee61939",
	            "SandboxKey": "/var/run/docker/netns/63efac2b58e9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33538"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-726754": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9e8c6653f07f7d8ed78cb0de75273ce75b49c92bdeb49b5637651604d53531a3",
	                    "EndpointID": "c8e7fde893fb82a3081332597e79677b93ba6f2a3afabe9e5386d9647f5646c2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-726754",
	                        "319dc7f1aee5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-726754 -n addons-726754
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-726754 logs -n 25: (1.695555581s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-040557   | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |                     |
	|         | -p download-only-040557              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| delete  | -p download-only-040557              | download-only-040557   | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| start   | -o=json --download-only              | download-only-783356   | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |                     |
	|         | -p download-only-783356              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| delete  | -p download-only-783356              | download-only-783356   | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| delete  | -p download-only-040557              | download-only-040557   | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| delete  | -p download-only-783356              | download-only-783356   | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| start   | --download-only -p                   | download-docker-558946 | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |                     |
	|         | download-docker-558946               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-558946            | download-docker-558946 | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| start   | --download-only -p                   | binary-mirror-264953   | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |                     |
	|         | binary-mirror-264953                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43903               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-264953              | binary-mirror-264953   | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| addons  | disable dashboard -p                 | addons-726754          | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |                     |
	|         | addons-726754                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-726754          | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |                     |
	|         | addons-726754                        |                        |         |         |                     |                     |
	| start   | -p addons-726754 --wait=true         | addons-726754          | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:05 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 23:01:56
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 23:01:56.628023 1740482 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:01:56.628213 1740482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:01:56.628227 1740482 out.go:358] Setting ErrFile to fd 2...
	I0827 23:01:56.628234 1740482 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:01:56.628539 1740482 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:01:56.629043 1740482 out.go:352] Setting JSON to false
	I0827 23:01:56.630040 1740482 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24266,"bootTime":1724775451,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0827 23:01:56.630112 1740482 start.go:139] virtualization:  
	I0827 23:01:56.632511 1740482 out.go:177] * [addons-726754] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 23:01:56.634848 1740482 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:01:56.634972 1740482 notify.go:220] Checking for updates...
	I0827 23:01:56.638864 1740482 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:01:56.640645 1740482 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:01:56.642322 1740482 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	I0827 23:01:56.644100 1740482 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 23:01:56.645870 1740482 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:01:56.647949 1740482 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:01:56.673943 1740482 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 23:01:56.674068 1740482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:01:56.742660 1740482 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-27 23:01:56.733300539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:01:56.742769 1740482 docker.go:307] overlay module found
	I0827 23:01:56.744855 1740482 out.go:177] * Using the docker driver based on user configuration
	I0827 23:01:56.746910 1740482 start.go:297] selected driver: docker
	I0827 23:01:56.746929 1740482 start.go:901] validating driver "docker" against <nil>
	I0827 23:01:56.746942 1740482 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:01:56.747555 1740482 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:01:56.816695 1740482 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-27 23:01:56.807322573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:01:56.816860 1740482 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 23:01:56.817095 1740482 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:01:56.818783 1740482 out.go:177] * Using Docker driver with root privileges
	I0827 23:01:56.820429 1740482 cni.go:84] Creating CNI manager for ""
	I0827 23:01:56.820455 1740482 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:01:56.820471 1740482 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0827 23:01:56.820545 1740482 start.go:340] cluster config:
	{Name:addons-726754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-726754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:01:56.822554 1740482 out.go:177] * Starting "addons-726754" primary control-plane node in "addons-726754" cluster
	I0827 23:01:56.824343 1740482 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0827 23:01:56.826139 1740482 out.go:177] * Pulling base image v0.0.44-1724667927-19511 ...
	I0827 23:01:56.827901 1740482 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0827 23:01:56.827966 1740482 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0827 23:01:56.827985 1740482 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local docker daemon
	I0827 23:01:56.827989 1740482 cache.go:56] Caching tarball of preloaded images
	I0827 23:01:56.828073 1740482 preload.go:172] Found /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 23:01:56.828083 1740482 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0827 23:01:56.828504 1740482 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/config.json ...
	I0827 23:01:56.828534 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/config.json: {Name:mk64aad033d3c0556d71a4c1d4ff26e28a9622bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:01:56.843001 1740482 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 23:01:56.843105 1740482 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory
	I0827 23:01:56.843127 1740482 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory, skipping pull
	I0827 23:01:56.843132 1740482 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 exists in cache, skipping pull
	I0827 23:01:56.843143 1740482 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 as a tarball
	I0827 23:01:56.843148 1740482 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 from local cache
	I0827 23:02:14.545904 1740482 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 from cached tarball
	I0827 23:02:14.545945 1740482 cache.go:194] Successfully downloaded all kic artifacts
	I0827 23:02:14.545992 1740482 start.go:360] acquireMachinesLock for addons-726754: {Name:mkbd04d8b968309578ca3eaa7c634587e47bdbce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:02:14.546679 1740482 start.go:364] duration metric: took 660.721µs to acquireMachinesLock for "addons-726754"
	I0827 23:02:14.546718 1740482 start.go:93] Provisioning new machine with config: &{Name:addons-726754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-726754 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0827 23:02:14.546796 1740482 start.go:125] createHost starting for "" (driver="docker")
	I0827 23:02:14.548923 1740482 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0827 23:02:14.549170 1740482 start.go:159] libmachine.API.Create for "addons-726754" (driver="docker")
	I0827 23:02:14.549205 1740482 client.go:168] LocalClient.Create starting
	I0827 23:02:14.549320 1740482 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem
	I0827 23:02:15.034761 1740482 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem
	I0827 23:02:15.266459 1740482 cli_runner.go:164] Run: docker network inspect addons-726754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0827 23:02:15.284405 1740482 cli_runner.go:211] docker network inspect addons-726754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0827 23:02:15.284514 1740482 network_create.go:284] running [docker network inspect addons-726754] to gather additional debugging logs...
	I0827 23:02:15.284539 1740482 cli_runner.go:164] Run: docker network inspect addons-726754
	W0827 23:02:15.298727 1740482 cli_runner.go:211] docker network inspect addons-726754 returned with exit code 1
	I0827 23:02:15.298766 1740482 network_create.go:287] error running [docker network inspect addons-726754]: docker network inspect addons-726754: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-726754 not found
	I0827 23:02:15.298780 1740482 network_create.go:289] output of [docker network inspect addons-726754]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-726754 not found
	
	** /stderr **
	I0827 23:02:15.298898 1740482 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0827 23:02:15.314837 1740482 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ae8850}
	I0827 23:02:15.314881 1740482 network_create.go:124] attempt to create docker network addons-726754 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0827 23:02:15.314938 1740482 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-726754 addons-726754
	I0827 23:02:15.386886 1740482 network_create.go:108] docker network addons-726754 192.168.49.0/24 created
	I0827 23:02:15.386921 1740482 kic.go:121] calculated static IP "192.168.49.2" for the "addons-726754" container
	I0827 23:02:15.387011 1740482 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0827 23:02:15.402318 1740482 cli_runner.go:164] Run: docker volume create addons-726754 --label name.minikube.sigs.k8s.io=addons-726754 --label created_by.minikube.sigs.k8s.io=true
	I0827 23:02:15.419633 1740482 oci.go:103] Successfully created a docker volume addons-726754
	I0827 23:02:15.419730 1740482 cli_runner.go:164] Run: docker run --rm --name addons-726754-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-726754 --entrypoint /usr/bin/test -v addons-726754:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 -d /var/lib
	I0827 23:02:16.418197 1740482 oci.go:107] Successfully prepared a docker volume addons-726754
	I0827 23:02:16.418238 1740482 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0827 23:02:16.418261 1740482 kic.go:194] Starting extracting preloaded images to volume ...
	I0827 23:02:16.418345 1740482 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-726754:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 -I lz4 -xf /preloaded.tar -C /extractDir
	I0827 23:02:20.500846 1740482 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-726754:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 -I lz4 -xf /preloaded.tar -C /extractDir: (4.082435575s)
	I0827 23:02:20.500877 1740482 kic.go:203] duration metric: took 4.08261455s to extract preloaded images to volume ...
	W0827 23:02:20.501018 1740482 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0827 23:02:20.501132 1740482 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0827 23:02:20.550551 1740482 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-726754 --name addons-726754 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-726754 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-726754 --network addons-726754 --ip 192.168.49.2 --volume addons-726754:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760
	I0827 23:02:20.852790 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Running}}
	I0827 23:02:20.873013 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:20.893406 1740482 cli_runner.go:164] Run: docker exec addons-726754 stat /var/lib/dpkg/alternatives/iptables
	I0827 23:02:20.966651 1740482 oci.go:144] the created container "addons-726754" has a running status.
	I0827 23:02:20.966680 1740482 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa...
	I0827 23:02:21.312777 1740482 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0827 23:02:21.335350 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:21.356197 1740482 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0827 23:02:21.356222 1740482 kic_runner.go:114] Args: [docker exec --privileged addons-726754 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0827 23:02:21.439909 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:21.464780 1740482 machine.go:93] provisionDockerMachine start ...
	I0827 23:02:21.464894 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:21.482553 1740482 main.go:141] libmachine: Using SSH client type: native
	I0827 23:02:21.482821 1740482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I0827 23:02:21.482830 1740482 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 23:02:21.664213 1740482 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-726754
	
	I0827 23:02:21.664246 1740482 ubuntu.go:169] provisioning hostname "addons-726754"
	I0827 23:02:21.664318 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:21.690648 1740482 main.go:141] libmachine: Using SSH client type: native
	I0827 23:02:21.690924 1740482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I0827 23:02:21.690943 1740482 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-726754 && echo "addons-726754" | sudo tee /etc/hostname
	I0827 23:02:21.874306 1740482 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-726754
	
	I0827 23:02:21.874480 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:21.899470 1740482 main.go:141] libmachine: Using SSH client type: native
	I0827 23:02:21.899719 1740482 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33534 <nil> <nil>}
	I0827 23:02:21.899735 1740482 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-726754' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-726754/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-726754' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 23:02:22.048965 1740482 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 23:02:22.048992 1740482 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19522-1734325/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-1734325/.minikube}
	I0827 23:02:22.049025 1740482 ubuntu.go:177] setting up certificates
	I0827 23:02:22.049035 1740482 provision.go:84] configureAuth start
	I0827 23:02:22.049101 1740482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-726754
	I0827 23:02:22.066229 1740482 provision.go:143] copyHostCerts
	I0827 23:02:22.066314 1740482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/cert.pem (1123 bytes)
	I0827 23:02:22.066433 1740482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/key.pem (1675 bytes)
	I0827 23:02:22.066499 1740482 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.pem (1078 bytes)
	I0827 23:02:22.066546 1740482 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem org=jenkins.addons-726754 san=[127.0.0.1 192.168.49.2 addons-726754 localhost minikube]
	I0827 23:02:22.308961 1740482 provision.go:177] copyRemoteCerts
	I0827 23:02:22.309060 1740482 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 23:02:22.309117 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:22.325859 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:22.424948 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0827 23:02:22.449543 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0827 23:02:22.473288 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 23:02:22.498281 1740482 provision.go:87] duration metric: took 449.231925ms to configureAuth
	I0827 23:02:22.498312 1740482 ubuntu.go:193] setting minikube options for container-runtime
	I0827 23:02:22.498511 1740482 config.go:182] Loaded profile config "addons-726754": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:02:22.498525 1740482 machine.go:96] duration metric: took 1.033722936s to provisionDockerMachine
	I0827 23:02:22.498532 1740482 client.go:171] duration metric: took 7.949316507s to LocalClient.Create
	I0827 23:02:22.498553 1740482 start.go:167] duration metric: took 7.949383583s to libmachine.API.Create "addons-726754"
	I0827 23:02:22.498569 1740482 start.go:293] postStartSetup for "addons-726754" (driver="docker")
	I0827 23:02:22.498587 1740482 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 23:02:22.498647 1740482 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 23:02:22.498692 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:22.515277 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:22.619332 1740482 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 23:02:22.623245 1740482 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0827 23:02:22.623281 1740482 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0827 23:02:22.623292 1740482 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0827 23:02:22.623299 1740482 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0827 23:02:22.623309 1740482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-1734325/.minikube/addons for local assets ...
	I0827 23:02:22.623376 1740482 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-1734325/.minikube/files for local assets ...
	I0827 23:02:22.623402 1740482 start.go:296] duration metric: took 124.82ms for postStartSetup
	I0827 23:02:22.623719 1740482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-726754
	I0827 23:02:22.642750 1740482 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/config.json ...
	I0827 23:02:22.643036 1740482 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:02:22.643097 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:22.660777 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:22.761928 1740482 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0827 23:02:22.768146 1740482 start.go:128] duration metric: took 8.22133343s to createHost
	I0827 23:02:22.768173 1740482 start.go:83] releasing machines lock for "addons-726754", held for 8.221477632s
	I0827 23:02:22.768246 1740482 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-726754
	I0827 23:02:22.786487 1740482 ssh_runner.go:195] Run: cat /version.json
	I0827 23:02:22.786555 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:22.786784 1740482 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 23:02:22.786857 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:22.805255 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:22.816991 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:23.039576 1740482 ssh_runner.go:195] Run: systemctl --version
	I0827 23:02:23.044218 1740482 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0827 23:02:23.049254 1740482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0827 23:02:23.075620 1740482 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0827 23:02:23.075703 1740482 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 23:02:23.104093 1740482 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0827 23:02:23.104120 1740482 start.go:495] detecting cgroup driver to use...
	I0827 23:02:23.104153 1740482 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0827 23:02:23.104202 1740482 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0827 23:02:23.117187 1740482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 23:02:23.129021 1740482 docker.go:217] disabling cri-docker service (if available) ...
	I0827 23:02:23.129131 1740482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 23:02:23.143711 1740482 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 23:02:23.158085 1740482 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 23:02:23.238834 1740482 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 23:02:23.331904 1740482 docker.go:233] disabling docker service ...
	I0827 23:02:23.332020 1740482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 23:02:23.352791 1740482 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 23:02:23.364679 1740482 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 23:02:23.453515 1740482 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 23:02:23.544496 1740482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 23:02:23.556102 1740482 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 23:02:23.573595 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0827 23:02:23.583703 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0827 23:02:23.593882 1740482 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0827 23:02:23.593988 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0827 23:02:23.604082 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 23:02:23.614329 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0827 23:02:23.624565 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 23:02:23.634759 1740482 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 23:02:23.644337 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0827 23:02:23.654718 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0827 23:02:23.664979 1740482 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
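	The run of sed commands above rewrites /etc/containerd/config.toml in place; the edit at 23:02:23.593988 forces SystemdCgroup = false so containerd matches the "cgroupfs" driver detected earlier. A hedged Go sketch of the same kind of in-place edit follows (the path is taken from the log, the rest is an illustration, not minikube's containerd.go):

```go
package main

import (
	"os"
	"regexp"
)

// Force SystemdCgroup = false in containerd's config, preserving indentation,
// equivalent in spirit to the sed command shown in the log above.
func main() {
	path := "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}
```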
	I0827 23:02:23.675274 1740482 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 23:02:23.684066 1740482 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 23:02:23.692556 1740482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:02:23.782281 1740482 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0827 23:02:23.907393 1740482 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0827 23:02:23.907482 1740482 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0827 23:02:23.911044 1740482 start.go:563] Will wait 60s for crictl version
	I0827 23:02:23.911105 1740482 ssh_runner.go:195] Run: which crictl
	I0827 23:02:23.914462 1740482 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 23:02:23.955596 1740482 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
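	The "Will wait 60s for socket path /run/containerd/containerd.sock" step above is a simple readiness poll on the containerd unix socket. A small illustrative Go sketch of such a wait loop (not the actual start.go implementation; the retry cadence is assumed):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket polls a unix socket until it accepts a connection or the
// deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("unix", path, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready within %s", path, timeout)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
```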
	I0827 23:02:23.955677 1740482 ssh_runner.go:195] Run: containerd --version
	I0827 23:02:23.977200 1740482 ssh_runner.go:195] Run: containerd --version
	I0827 23:02:24.001126 1740482 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0827 23:02:24.002651 1740482 cli_runner.go:164] Run: docker network inspect addons-726754 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0827 23:02:24.023081 1740482 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0827 23:02:24.027395 1740482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:02:24.040289 1740482 kubeadm.go:883] updating cluster {Name:addons-726754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-726754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 23:02:24.040475 1740482 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0827 23:02:24.040548 1740482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:02:24.077881 1740482 containerd.go:627] all images are preloaded for containerd runtime.
	I0827 23:02:24.077907 1740482 containerd.go:534] Images already preloaded, skipping extraction
	I0827 23:02:24.077971 1740482 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:02:24.123523 1740482 containerd.go:627] all images are preloaded for containerd runtime.
	I0827 23:02:24.123549 1740482 cache_images.go:84] Images are preloaded, skipping loading
	I0827 23:02:24.123561 1740482 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0827 23:02:24.123669 1740482 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-726754 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-726754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 23:02:24.123744 1740482 ssh_runner.go:195] Run: sudo crictl info
	I0827 23:02:24.160156 1740482 cni.go:84] Creating CNI manager for ""
	I0827 23:02:24.160227 1740482 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:02:24.160250 1740482 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 23:02:24.160330 1740482 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-726754 NodeName:addons-726754 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 23:02:24.160546 1740482 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-726754"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 23:02:24.160628 1740482 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 23:02:24.169833 1740482 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 23:02:24.169954 1740482 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 23:02:24.178692 1740482 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0827 23:02:24.196834 1740482 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 23:02:24.215537 1740482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0827 23:02:24.233514 1740482 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0827 23:02:24.237089 1740482 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:02:24.248049 1740482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:02:24.339359 1740482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:02:24.353717 1740482 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754 for IP: 192.168.49.2
	I0827 23:02:24.353739 1740482 certs.go:194] generating shared ca certs ...
	I0827 23:02:24.353755 1740482 certs.go:226] acquiring lock for ca certs: {Name:mkd3d47e0a7419f9dbeb7a4e1a68db1090a3adb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:24.354470 1740482 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.key
	I0827 23:02:24.876636 1740482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.crt ...
	I0827 23:02:24.876669 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.crt: {Name:mk2889b552f38a4afa4cc458033d4165e94b5208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:24.877504 1740482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.key ...
	I0827 23:02:24.877524 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.key: {Name:mkdc2142ec85f1bcaa2c058f81ce45ae59be658b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:24.877621 1740482 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.key
	I0827 23:02:25.901715 1740482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.crt ...
	I0827 23:02:25.901808 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.crt: {Name:mkf24b0e0340b839eae7a612c0be8247a7488339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:25.902650 1740482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.key ...
	I0827 23:02:25.902707 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.key: {Name:mk108ce94435cfe2f9b12c1a142159ab2faa1342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:25.902875 1740482 certs.go:256] generating profile certs ...
	I0827 23:02:25.902982 1740482 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.key
	I0827 23:02:25.903018 1740482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt with IP's: []
	I0827 23:02:26.308642 1740482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt ...
	I0827 23:02:26.308674 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: {Name:mk8abb04963bdb91bbfc6df840b32092ec1cde85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:26.308871 1740482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.key ...
	I0827 23:02:26.308884 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.key: {Name:mkfe570828c0d88aef5600e04e8ebd6bf3d32e87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:26.309532 1740482 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.key.59a65641
	I0827 23:02:26.309556 1740482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.crt.59a65641 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0827 23:02:26.500272 1740482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.crt.59a65641 ...
	I0827 23:02:26.500305 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.crt.59a65641: {Name:mka19e514749aec15d03295ab60ac76283da0a0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:26.501031 1740482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.key.59a65641 ...
	I0827 23:02:26.501052 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.key.59a65641: {Name:mk12e8177fe045947dcc93fad4ccd69bda55134a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:26.501151 1740482 certs.go:381] copying /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.crt.59a65641 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.crt
	I0827 23:02:26.501236 1740482 certs.go:385] copying /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.key.59a65641 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.key
	I0827 23:02:26.501305 1740482 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/proxy-client.key
	I0827 23:02:26.501327 1740482 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/proxy-client.crt with IP's: []
	I0827 23:02:27.025079 1740482 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/proxy-client.crt ...
	I0827 23:02:27.025117 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/proxy-client.crt: {Name:mk699453ed207b82472cdf4cbd4f7541af6288c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:27.025315 1740482 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/proxy-client.key ...
	I0827 23:02:27.025329 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/proxy-client.key: {Name:mk4758fa6e49ed1c9710049f3a119b2a23105981 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:27.025512 1740482 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 23:02:27.025563 1740482 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem (1078 bytes)
	I0827 23:02:27.025594 1740482 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem (1123 bytes)
	I0827 23:02:27.025622 1740482 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem (1675 bytes)
	I0827 23:02:27.026342 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 23:02:27.054318 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0827 23:02:27.080356 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 23:02:27.105961 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 23:02:27.131102 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0827 23:02:27.156077 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 23:02:27.180830 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 23:02:27.204179 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 23:02:27.228030 1740482 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 23:02:27.254786 1740482 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 23:02:27.273055 1740482 ssh_runner.go:195] Run: openssl version
	I0827 23:02:27.278776 1740482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 23:02:27.288635 1740482 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:02:27.292098 1740482 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:02:27.292184 1740482 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:02:27.299087 1740482 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 23:02:27.308742 1740482 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 23:02:27.312083 1740482 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0827 23:02:27.312129 1740482 kubeadm.go:392] StartCluster: {Name:addons-726754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-726754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:02:27.312209 1740482 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0827 23:02:27.312268 1740482 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 23:02:27.349303 1740482 cri.go:89] found id: ""
	I0827 23:02:27.349446 1740482 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 23:02:27.360700 1740482 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0827 23:02:27.371953 1740482 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0827 23:02:27.372058 1740482 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0827 23:02:27.384549 1740482 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0827 23:02:27.384611 1740482 kubeadm.go:157] found existing configuration files:
	
	I0827 23:02:27.384690 1740482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0827 23:02:27.398422 1740482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0827 23:02:27.398528 1740482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0827 23:02:27.407118 1740482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0827 23:02:27.416300 1740482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0827 23:02:27.416482 1740482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0827 23:02:27.425715 1740482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0827 23:02:27.435933 1740482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0827 23:02:27.436033 1740482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0827 23:02:27.445057 1740482 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0827 23:02:27.454291 1740482 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0827 23:02:27.454388 1740482 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0827 23:02:27.462935 1740482 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0827 23:02:27.503103 1740482 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0827 23:02:27.503436 1740482 kubeadm.go:310] [preflight] Running pre-flight checks
	I0827 23:02:27.522843 1740482 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0827 23:02:27.522919 1740482 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0827 23:02:27.522959 1740482 kubeadm.go:310] OS: Linux
	I0827 23:02:27.523008 1740482 kubeadm.go:310] CGROUPS_CPU: enabled
	I0827 23:02:27.523058 1740482 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0827 23:02:27.523107 1740482 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0827 23:02:27.523157 1740482 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0827 23:02:27.523207 1740482 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0827 23:02:27.523257 1740482 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0827 23:02:27.523304 1740482 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0827 23:02:27.523356 1740482 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0827 23:02:27.523405 1740482 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0827 23:02:27.587139 1740482 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0827 23:02:27.587249 1740482 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0827 23:02:27.587671 1740482 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0827 23:02:27.596762 1740482 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0827 23:02:27.599670 1740482 out.go:235]   - Generating certificates and keys ...
	I0827 23:02:27.599769 1740482 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0827 23:02:27.599838 1740482 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0827 23:02:28.401782 1740482 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0827 23:02:28.584354 1740482 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0827 23:02:29.665161 1740482 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0827 23:02:29.970251 1740482 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0827 23:02:31.057668 1740482 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0827 23:02:31.058018 1740482 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-726754 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0827 23:02:31.572256 1740482 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0827 23:02:31.572434 1740482 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-726754 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0827 23:02:32.065306 1740482 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0827 23:02:32.489581 1740482 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0827 23:02:33.196459 1740482 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0827 23:02:33.196857 1740482 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0827 23:02:33.895350 1740482 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0827 23:02:34.237891 1740482 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0827 23:02:34.775388 1740482 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0827 23:02:35.448775 1740482 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0827 23:02:36.165541 1740482 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0827 23:02:36.167088 1740482 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0827 23:02:36.169438 1740482 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0827 23:02:36.171778 1740482 out.go:235]   - Booting up control plane ...
	I0827 23:02:36.171905 1740482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0827 23:02:36.171991 1740482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0827 23:02:36.172595 1740482 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0827 23:02:36.185561 1740482 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0827 23:02:36.192039 1740482 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0827 23:02:36.192101 1740482 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0827 23:02:36.299338 1740482 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0827 23:02:36.299744 1740482 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0827 23:02:37.301715 1740482 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002347003s
	I0827 23:02:37.301801 1740482 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0827 23:02:43.303081 1740482 kubeadm.go:310] [api-check] The API server is healthy after 6.001398635s
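	The [kubelet-check] and [api-check] phases above simply poll health endpoints until they return 200 OK within their 4m0s budgets. A rough Go sketch of that polling, using the kubelet healthz URL quoted in the log (kubeadm's real checks differ in detail; this is illustration only):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls an HTTP health endpoint until it returns 200 or the
// timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	client := &http.Client{Timeout: 2 * time.Second}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	// Kubelet healthz endpoint from the [kubelet-check] line above.
	if err := waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```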
	I0827 23:02:43.323471 1740482 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0827 23:02:43.340391 1740482 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0827 23:02:43.372748 1740482 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0827 23:02:43.372939 1740482 kubeadm.go:310] [mark-control-plane] Marking the node addons-726754 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0827 23:02:43.390760 1740482 kubeadm.go:310] [bootstrap-token] Using token: kd7f31.sf1crtudhrh1a6sj
	I0827 23:02:43.392908 1740482 out.go:235]   - Configuring RBAC rules ...
	I0827 23:02:43.393036 1740482 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0827 23:02:43.403285 1740482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0827 23:02:43.415123 1740482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0827 23:02:43.419098 1740482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0827 23:02:43.423741 1740482 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0827 23:02:43.429277 1740482 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0827 23:02:43.710169 1740482 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0827 23:02:44.138176 1740482 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0827 23:02:44.709796 1740482 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0827 23:02:44.711082 1740482 kubeadm.go:310] 
	I0827 23:02:44.711160 1740482 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0827 23:02:44.711170 1740482 kubeadm.go:310] 
	I0827 23:02:44.711245 1740482 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0827 23:02:44.711254 1740482 kubeadm.go:310] 
	I0827 23:02:44.711279 1740482 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0827 23:02:44.711340 1740482 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0827 23:02:44.711399 1740482 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0827 23:02:44.711408 1740482 kubeadm.go:310] 
	I0827 23:02:44.711461 1740482 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0827 23:02:44.711468 1740482 kubeadm.go:310] 
	I0827 23:02:44.711514 1740482 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0827 23:02:44.711522 1740482 kubeadm.go:310] 
	I0827 23:02:44.711573 1740482 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0827 23:02:44.711649 1740482 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0827 23:02:44.711718 1740482 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0827 23:02:44.711727 1740482 kubeadm.go:310] 
	I0827 23:02:44.711807 1740482 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0827 23:02:44.711884 1740482 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0827 23:02:44.711895 1740482 kubeadm.go:310] 
	I0827 23:02:44.711976 1740482 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token kd7f31.sf1crtudhrh1a6sj \
	I0827 23:02:44.712079 1740482 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e879f986fa82e6940b81a691a9ced07ae93cad55f5ba78217e5b550b4c965d8a \
	I0827 23:02:44.712103 1740482 kubeadm.go:310] 	--control-plane 
	I0827 23:02:44.712114 1740482 kubeadm.go:310] 
	I0827 23:02:44.712195 1740482 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0827 23:02:44.712203 1740482 kubeadm.go:310] 
	I0827 23:02:44.712282 1740482 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token kd7f31.sf1crtudhrh1a6sj \
	I0827 23:02:44.712418 1740482 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e879f986fa82e6940b81a691a9ced07ae93cad55f5ba78217e5b550b4c965d8a 
	I0827 23:02:44.716669 1740482 kubeadm.go:310] W0827 23:02:27.499564    1034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 23:02:44.716961 1740482 kubeadm.go:310] W0827 23:02:27.500618    1034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0827 23:02:44.717172 1740482 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0827 23:02:44.717282 1740482 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0827 23:02:44.717308 1740482 cni.go:84] Creating CNI manager for ""
	I0827 23:02:44.717320 1740482 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:02:44.719475 1740482 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0827 23:02:44.721472 1740482 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0827 23:02:44.725338 1740482 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0827 23:02:44.725360 1740482 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0827 23:02:44.745646 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0827 23:02:45.120213 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:45.120339 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-726754 minikube.k8s.io/updated_at=2024_08_27T23_02_45_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf minikube.k8s.io/name=addons-726754 minikube.k8s.io/primary=true
	I0827 23:02:45.120461 1740482 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0827 23:02:45.372624 1740482 ops.go:34] apiserver oom_adj: -16
	I0827 23:02:45.372720 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:45.873589 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:46.372891 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:46.872870 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:47.373731 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:47.873219 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:48.373415 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:48.873733 1740482 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0827 23:02:49.029838 1740482 kubeadm.go:1113] duration metric: took 3.90968901s to wait for elevateKubeSystemPrivileges
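	The repeated `kubectl get sa default` runs between 23:02:45 and 23:02:49 are a wait loop: the command is retried until the default ServiceAccount exists before kube-system privileges are granted. A simplified Go sketch of such a retry follows; the binary and kubeconfig paths are copied from the log, while the 2-minute budget and 500ms cadence are assumptions, not minikube's exact logic:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the "default" ServiceAccount has been created.
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is available")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}
```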
	I0827 23:02:49.029864 1740482 kubeadm.go:394] duration metric: took 21.717738344s to StartCluster
	I0827 23:02:49.029887 1740482 settings.go:142] acquiring lock: {Name:mk2abdfb376a9e7540e648c96e5aaa1709f13213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:49.030009 1740482 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:02:49.030386 1740482 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/kubeconfig: {Name:mkbc2349839e7e640d3be8c9c9dabdbaf532417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:02:49.031215 1740482 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0827 23:02:49.031387 1740482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0827 23:02:49.031666 1740482 config.go:182] Loaded profile config "addons-726754": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:02:49.031696 1740482 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0827 23:02:49.031769 1740482 addons.go:69] Setting yakd=true in profile "addons-726754"
	I0827 23:02:49.031791 1740482 addons.go:234] Setting addon yakd=true in "addons-726754"
	I0827 23:02:49.031815 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.032315 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.032758 1740482 addons.go:69] Setting inspektor-gadget=true in profile "addons-726754"
	I0827 23:02:49.032785 1740482 addons.go:234] Setting addon inspektor-gadget=true in "addons-726754"
	I0827 23:02:49.032820 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.033278 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.033806 1740482 addons.go:69] Setting cloud-spanner=true in profile "addons-726754"
	I0827 23:02:49.033845 1740482 addons.go:234] Setting addon cloud-spanner=true in "addons-726754"
	I0827 23:02:49.033877 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.034316 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.036307 1740482 addons.go:69] Setting metrics-server=true in profile "addons-726754"
	I0827 23:02:49.036895 1740482 addons.go:234] Setting addon metrics-server=true in "addons-726754"
	I0827 23:02:49.036794 1740482 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-726754"
	I0827 23:02:49.036805 1740482 addons.go:69] Setting default-storageclass=true in profile "addons-726754"
	I0827 23:02:49.036816 1740482 addons.go:69] Setting gcp-auth=true in profile "addons-726754"
	I0827 23:02:49.036820 1740482 addons.go:69] Setting ingress=true in profile "addons-726754"
	I0827 23:02:49.036824 1740482 addons.go:69] Setting ingress-dns=true in profile "addons-726754"
	I0827 23:02:49.039034 1740482 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-726754"
	I0827 23:02:49.039366 1740482 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-726754"
	I0827 23:02:49.040679 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.045922 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.046166 1740482 out.go:177] * Verifying Kubernetes components...
	I0827 23:02:49.046400 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.049723 1740482 addons.go:234] Setting addon ingress=true in "addons-726754"
	I0827 23:02:49.049776 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.050214 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.062404 1740482 addons.go:234] Setting addon ingress-dns=true in "addons-726754"
	I0827 23:02:49.062463 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.062919 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.064557 1740482 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-726754"
	I0827 23:02:49.064595 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.065013 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.039046 1740482 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-726754"
	I0827 23:02:49.082590 1740482 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-726754"
	I0827 23:02:49.082655 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.082940 1740482 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-726754"
	I0827 23:02:49.083175 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.083204 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.095202 1740482 mustload.go:65] Loading cluster: addons-726754
	I0827 23:02:49.095464 1740482 config.go:182] Loaded profile config "addons-726754": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:02:49.039050 1740482 addons.go:69] Setting registry=true in profile "addons-726754"
	I0827 23:02:49.095545 1740482 addons.go:234] Setting addon registry=true in "addons-726754"
	I0827 23:02:49.095581 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.095743 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.039054 1740482 addons.go:69] Setting storage-provisioner=true in profile "addons-726754"
	I0827 23:02:49.098724 1740482 addons.go:234] Setting addon storage-provisioner=true in "addons-726754"
	I0827 23:02:49.098765 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.039058 1740482 addons.go:69] Setting volumesnapshots=true in profile "addons-726754"
	I0827 23:02:49.098893 1740482 addons.go:234] Setting addon volumesnapshots=true in "addons-726754"
	I0827 23:02:49.039061 1740482 addons.go:69] Setting volcano=true in profile "addons-726754"
	I0827 23:02:49.098941 1740482 addons.go:234] Setting addon volcano=true in "addons-726754"
	I0827 23:02:49.099075 1740482 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:02:49.142926 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.143733 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.152941 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.098967 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.162419 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.179545 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.212595 1740482 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0827 23:02:49.215861 1740482 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0827 23:02:49.215958 1740482 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0827 23:02:49.219801 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
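
The repeated cli_runner calls of the form docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" resolve the host port that Docker published for the container's SSH port (22/tcp); that port (33534 in this run) is what the later sshutil lines dial. A minimal, self-contained Go sketch of how such a template resolves, using made-up sample data in place of the real inspect output:

// Illustrative sketch, not minikube source: field names mirror Docker's
// inspect JSON; the sample values below are stand-ins.
package main

import (
	"os"
	"text/template"
)

type portBinding struct {
	HostIP   string
	HostPort string
}

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var c inspect
	c.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIP: "127.0.0.1", HostPort: "33534"}}, // sample value from this run
	}
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = tmpl.Execute(os.Stdout, c) // prints 33534, the port the sshutil lines connect to
}
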
	I0827 23:02:49.234694 1740482 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0827 23:02:49.237107 1740482 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0827 23:02:49.240660 1740482 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0827 23:02:49.240681 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0827 23:02:49.240754 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.254648 1740482 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-726754"
	I0827 23:02:49.255271 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.255835 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.254947 1740482 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0827 23:02:49.282834 1740482 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0827 23:02:49.282920 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.290212 1740482 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0827 23:02:49.306529 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0827 23:02:49.308319 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0827 23:02:49.308902 1740482 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0827 23:02:49.311612 1740482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0827 23:02:49.311895 1740482 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0827 23:02:49.311909 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0827 23:02:49.311972 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.333703 1740482 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0827 23:02:49.339441 1740482 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0827 23:02:49.339508 1740482 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0827 23:02:49.339617 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.361537 1740482 addons.go:234] Setting addon default-storageclass=true in "addons-726754"
	I0827 23:02:49.361627 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.362136 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:49.366996 1740482 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0827 23:02:49.367020 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0827 23:02:49.367086 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.380342 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0827 23:02:49.380901 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:49.382666 1740482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 23:02:49.383494 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.399494 1740482 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0827 23:02:49.401806 1740482 out.go:177]   - Using image docker.io/busybox:stable
	I0827 23:02:49.403782 1740482 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0827 23:02:49.403848 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0827 23:02:49.404107 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.412904 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0827 23:02:49.413957 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0827 23:02:49.420697 1740482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 23:02:49.427567 1740482 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0827 23:02:49.427621 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0827 23:02:49.427700 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.429896 1740482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0827 23:02:49.429922 1740482 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0827 23:02:49.430024 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.430072 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.452325 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0827 23:02:49.458189 1740482 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0827 23:02:49.458341 1740482 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0827 23:02:49.460635 1740482 out.go:177]   - Using image docker.io/registry:2.8.3
	I0827 23:02:49.460762 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0827 23:02:49.468422 1740482 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0827 23:02:49.468638 1740482 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0827 23:02:49.468654 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0827 23:02:49.468728 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.497884 1740482 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0827 23:02:49.498755 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0827 23:02:49.499348 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.519698 1740482 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:02:49.521083 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.526720 1740482 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0827 23:02:49.526747 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0827 23:02:49.526826 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.527684 1740482 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:02:49.527709 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 23:02:49.527779 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.544621 1740482 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0827 23:02:49.546758 1740482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0827 23:02:49.546784 1740482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0827 23:02:49.546902 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.561384 1740482 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 23:02:49.561404 1740482 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 23:02:49.561460 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:49.576952 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.578049 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.578359 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.592009 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.631360 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.635485 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.646754 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	W0827 23:02:49.650502 1740482 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0827 23:02:49.650531 1740482 retry.go:31] will retry after 233.847898ms: ssh: handshake failed: EOF
	I0827 23:02:49.664680 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.665453 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.671341 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:49.682789 1740482 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0827 23:02:49.695817 1740482 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0827 23:02:49.890615 1740482 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0827 23:02:49.890687 1740482 retry.go:31] will retry after 266.767423ms: ssh: handshake failed: EOF
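
The two "dial failure (will retry)" warnings above are absorbed by a bounded retry with a short randomized delay (a few hundred milliseconds in this run), after which the SSH handshakes succeed and provisioning continues. A minimal sketch of that retry pattern, an assumption about the shape of the loop rather than minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// withRetry calls do up to attempts times, sleeping a short randomized
// delay between failures, and returns the last error if all attempts fail.
func withRetry(attempts int, do func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = do(); err == nil {
			return nil
		}
		d := time.Duration(rand.Int63n(int64(400 * time.Millisecond)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	// Stand-in for the SSH dial that failed twice in the log above.
	attempt := 0
	_ = withRetry(3, func() error {
		attempt++
		if attempt < 3 {
			return errors.New("ssh: handshake failed: EOF")
		}
		return nil
	})
}
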
	I0827 23:02:50.246333 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0827 23:02:50.278905 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0827 23:02:50.285455 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0827 23:02:50.296504 1740482 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0827 23:02:50.296587 1740482 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0827 23:02:50.305791 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:02:50.310615 1740482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0827 23:02:50.310635 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0827 23:02:50.332321 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:02:50.354224 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0827 23:02:50.362469 1740482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0827 23:02:50.362541 1740482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0827 23:02:50.425509 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0827 23:02:50.442476 1740482 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0827 23:02:50.442500 1740482 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0827 23:02:50.464277 1740482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0827 23:02:50.464302 1740482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0827 23:02:50.479969 1740482 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0827 23:02:50.480037 1740482 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0827 23:02:50.493893 1740482 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0827 23:02:50.493963 1740482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0827 23:02:50.518879 1740482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0827 23:02:50.518956 1740482 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0827 23:02:50.537766 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0827 23:02:50.549035 1740482 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0827 23:02:50.549110 1740482 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0827 23:02:50.576773 1740482 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0827 23:02:50.576848 1740482 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0827 23:02:50.733366 1740482 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0827 23:02:50.733387 1740482 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0827 23:02:50.833636 1740482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0827 23:02:50.833713 1740482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0827 23:02:50.850128 1740482 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 23:02:50.850205 1740482 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0827 23:02:50.855067 1740482 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0827 23:02:50.855182 1740482 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0827 23:02:50.912647 1740482 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0827 23:02:50.912670 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0827 23:02:51.025930 1740482 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0827 23:02:51.026012 1740482 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0827 23:02:51.038208 1740482 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0827 23:02:51.038237 1740482 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0827 23:02:51.111509 1740482 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0827 23:02:51.111531 1740482 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0827 23:02:51.121130 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 23:02:51.246333 1740482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0827 23:02:51.246360 1740482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0827 23:02:51.426266 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0827 23:02:51.484518 1740482 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0827 23:02:51.484544 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0827 23:02:51.490048 1740482 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0827 23:02:51.490068 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0827 23:02:51.636240 1740482 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0827 23:02:51.636327 1740482 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0827 23:02:51.767666 1740482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0827 23:02:51.767740 1740482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0827 23:02:51.888954 1740482 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0827 23:02:51.889001 1740482 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0827 23:02:51.910727 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0827 23:02:51.954090 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0827 23:02:52.045145 1740482 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.362318653s)
	I0827 23:02:52.045228 1740482 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0827 23:02:52.046422 1740482 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.350579319s)
	I0827 23:02:52.047530 1740482 node_ready.go:35] waiting up to 6m0s for node "addons-726754" to be "Ready" ...
	I0827 23:02:52.054216 1740482 node_ready.go:49] node "addons-726754" has status "Ready":"True"
	I0827 23:02:52.054293 1740482 node_ready.go:38] duration metric: took 6.700902ms for node "addons-726754" to be "Ready" ...
	I0827 23:02:52.054319 1740482 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:02:52.065816 1740482 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace to be "Ready" ...
	I0827 23:02:52.217450 1740482 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0827 23:02:52.217525 1740482 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0827 23:02:52.356766 1740482 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0827 23:02:52.356829 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0827 23:02:52.384419 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0827 23:02:52.529931 1740482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0827 23:02:52.529994 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0827 23:02:52.549569 1740482 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-726754" context rescaled to 1 replicas
	I0827 23:02:52.881785 1740482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0827 23:02:52.881858 1740482 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0827 23:02:53.144088 1740482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0827 23:02:53.144162 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0827 23:02:53.430353 1740482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0827 23:02:53.430378 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0827 23:02:53.744034 1740482 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0827 23:02:53.744060 1740482 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0827 23:02:54.038386 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0827 23:02:54.121507 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:02:56.130925 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:02:56.597457 1740482 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0827 23:02:56.597533 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:56.635136 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:56.979230 1740482 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0827 23:02:57.161843 1740482 addons.go:234] Setting addon gcp-auth=true in "addons-726754"
	I0827 23:02:57.161893 1740482 host.go:66] Checking if "addons-726754" exists ...
	I0827 23:02:57.162351 1740482 cli_runner.go:164] Run: docker container inspect addons-726754 --format={{.State.Status}}
	I0827 23:02:57.189027 1740482 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0827 23:02:57.189094 1740482 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726754
	I0827 23:02:57.225790 1740482 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/addons-726754/id_rsa Username:docker}
	I0827 23:02:58.598341 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:02:59.543679 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.297304762s)
	I0827 23:02:59.543811 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.264883681s)
	I0827 23:02:59.543827 1740482 addons.go:475] Verifying addon ingress=true in "addons-726754"
	I0827 23:02:59.543966 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.258422635s)
	I0827 23:02:59.544001 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.238191389s)
	I0827 23:02:59.544201 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.211715427s)
	I0827 23:02:59.544249 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.189914886s)
	I0827 23:02:59.544287 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.118713748s)
	I0827 23:02:59.544350 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.00651646s)
	I0827 23:02:59.544505 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.423347412s)
	I0827 23:02:59.544524 1740482 addons.go:475] Verifying addon metrics-server=true in "addons-726754"
	I0827 23:02:59.544664 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.118368899s)
	I0827 23:02:59.544863 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.634111502s)
	I0827 23:02:59.544883 1740482 addons.go:475] Verifying addon registry=true in "addons-726754"
	I0827 23:02:59.545151 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.590986972s)
	W0827 23:02:59.545185 1740482 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0827 23:02:59.545209 1740482 retry.go:31] will retry after 285.173413ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0827 23:02:59.545281 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.16078891s)
	I0827 23:02:59.548674 1740482 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-726754 service yakd-dashboard -n yakd-dashboard
	
	I0827 23:02:59.548781 1740482 out.go:177] * Verifying ingress addon...
	I0827 23:02:59.548814 1740482 out.go:177] * Verifying registry addon...
	I0827 23:02:59.552073 1740482 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0827 23:02:59.552073 1740482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0827 23:02:59.613576 1740482 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0827 23:02:59.613603 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:02:59.614529 1740482 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0827 23:02:59.614550 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
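
The long runs of kapi.go:96 "waiting for pod ..., current state: Pending" lines that follow come from a deadline-bounded polling loop: list the pods matching the label selector, check whether they are Ready, and sleep between attempts until the deadline expires. A minimal sketch of that pattern, an assumption rather than minikube's kapi package:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// waitFor polls check at the given interval until it returns true,
// returns an error, or ctx's deadline expires.
func waitFor(ctx context.Context, interval time.Duration, check func() (bool, error)) error {
	t := time.NewTicker(interval)
	defer t.Stop()
	for {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-t.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	start := time.Now()
	// Stand-in check: pretend the pod turns Ready after about two seconds.
	err := waitFor(ctx, 500*time.Millisecond, func() (bool, error) {
		fmt.Println("waiting for pod, current state: Pending")
		return time.Since(start) > 2*time.Second, nil
	})
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("timed out waiting for pod")
	}
}
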
	W0827 23:02:59.647175 1740482 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0827 23:02:59.830510 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
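
The apply failure at 23:02:59.545 is a CRD ordering problem: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but the CRD that introduces that kind is applied in the same batch, so the first pass finds no resource mapping for it ("ensure CRDs are installed first"). The log shows minikube recovering by retrying and re-applying with --force (the Run line just above, completed at 23:03:01.492). An alternative way to avoid the race, sketched with plain kubectl calls as an assumption rather than minikube code, is to apply the CRDs, wait for them to become Established, and only then apply the custom resources:

package main

import (
	"log"
	"os"
	"os/exec"
)

// kubectl runs a single kubectl command, streaming its output.
func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// 1. Apply the snapshot CRDs first (paths taken from the log above).
	crds := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
	}
	for _, f := range crds {
		if err := kubectl("apply", "-f", f); err != nil {
			log.Fatal(err)
		}
	}
	// 2. Wait until the CRD is Established so the new kind is discoverable.
	if err := kubectl("wait", "--for=condition=established",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io", "--timeout=60s"); err != nil {
		log.Fatal(err)
	}
	// 3. Only now apply objects of the new kind.
	if err := kubectl("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
		log.Fatal(err)
	}
}
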
	I0827 23:03:00.094414 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:00.102821 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:00.576078 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:00.595586 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:00.599657 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:03:00.683480 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.645037986s)
	I0827 23:03:00.683514 1740482 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-726754"
	I0827 23:03:00.683727 1740482 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.494673543s)
	I0827 23:03:00.686687 1740482 out.go:177] * Verifying csi-hostpath-driver addon...
	I0827 23:03:00.686772 1740482 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0827 23:03:00.689375 1740482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0827 23:03:00.691536 1740482 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0827 23:03:00.693220 1740482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0827 23:03:00.693248 1740482 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0827 23:03:00.694869 1740482 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0827 23:03:00.694896 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:00.736980 1740482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0827 23:03:00.737004 1740482 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0827 23:03:00.797895 1740482 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0827 23:03:00.797922 1740482 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0827 23:03:00.826250 1740482 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0827 23:03:01.060423 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:01.061028 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:01.195749 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:01.492120 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.661554912s)
	I0827 23:03:01.557856 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:01.558329 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:01.699920 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:01.834173 1740482 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.007881327s)
	I0827 23:03:01.837033 1740482 addons.go:475] Verifying addon gcp-auth=true in "addons-726754"
	I0827 23:03:01.839227 1740482 out.go:177] * Verifying gcp-auth addon...
	I0827 23:03:01.841642 1740482 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0827 23:03:01.852626 1740482 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0827 23:03:02.058096 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:02.058875 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:02.196167 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:02.558199 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:02.559740 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:02.695421 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:03.058909 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:03.061908 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:03.076295 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:03:03.196044 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:03.559046 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:03.560283 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:03.695039 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:04.056924 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:04.057591 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:04.194364 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:04.560141 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:04.561234 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:04.694517 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:05.061693 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:05.063980 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:05.081032 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:03:05.195716 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:05.559851 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:05.560917 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:05.695405 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:06.058021 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:06.058552 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:06.195025 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:06.558449 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:06.558905 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:06.694077 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:07.058064 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:07.058260 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:07.193935 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:07.557503 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:07.558433 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:07.573958 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:03:07.694626 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:08.058649 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:08.061672 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:08.195169 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:08.556216 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:08.556820 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:08.696097 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:09.059603 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:09.060529 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:09.196115 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:09.569781 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:09.572589 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:09.579043 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:03:09.696654 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:10.079980 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:10.081105 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:10.194690 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:10.558036 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:10.558662 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:10.694768 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:11.057513 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:11.057825 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:11.194577 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:11.557190 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:11.559174 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:11.695456 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:12.056620 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:12.057319 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:12.073704 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:03:12.196693 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:12.557711 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:12.558120 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:12.695117 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:13.056953 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:13.058976 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:13.195008 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:13.557451 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:13.558344 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:13.694422 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:14.057941 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:14.059463 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:14.073864 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:03:14.195060 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:14.557349 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:14.557945 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:14.694660 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:15.064136 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:15.065291 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:15.195753 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:15.559482 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:15.578372 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:15.693933 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:16.059799 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:16.062532 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:16.074804 1740482 pod_ready.go:103] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"False"
	I0827 23:03:16.195350 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:16.560281 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:16.563770 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:16.575032 1740482 pod_ready.go:93] pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace has status "Ready":"True"
	I0827 23:03:16.575058 1740482 pod_ready.go:82] duration metric: took 24.509168296s for pod "coredns-6f6b679f8f-fd2n8" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.575069 1740482 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-pr4r9" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.578075 1740482 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-pr4r9" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-pr4r9" not found
	I0827 23:03:16.578102 1740482 pod_ready.go:82] duration metric: took 3.025789ms for pod "coredns-6f6b679f8f-pr4r9" in "kube-system" namespace to be "Ready" ...
	E0827 23:03:16.578114 1740482 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-pr4r9" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-pr4r9" not found
	I0827 23:03:16.578121 1740482 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-726754" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.586391 1740482 pod_ready.go:93] pod "etcd-addons-726754" in "kube-system" namespace has status "Ready":"True"
	I0827 23:03:16.586416 1740482 pod_ready.go:82] duration metric: took 8.286822ms for pod "etcd-addons-726754" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.586430 1740482 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-726754" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.592607 1740482 pod_ready.go:93] pod "kube-apiserver-addons-726754" in "kube-system" namespace has status "Ready":"True"
	I0827 23:03:16.592632 1740482 pod_ready.go:82] duration metric: took 6.194099ms for pod "kube-apiserver-addons-726754" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.592645 1740482 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-726754" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.597945 1740482 pod_ready.go:93] pod "kube-controller-manager-addons-726754" in "kube-system" namespace has status "Ready":"True"
	I0827 23:03:16.597972 1740482 pod_ready.go:82] duration metric: took 5.319656ms for pod "kube-controller-manager-addons-726754" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.597985 1740482 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-bb76v" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.695091 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:16.769897 1740482 pod_ready.go:93] pod "kube-proxy-bb76v" in "kube-system" namespace has status "Ready":"True"
	I0827 23:03:16.769923 1740482 pod_ready.go:82] duration metric: took 171.930752ms for pod "kube-proxy-bb76v" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:16.769934 1740482 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-726754" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:17.058459 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:17.059002 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:17.171550 1740482 pod_ready.go:93] pod "kube-scheduler-addons-726754" in "kube-system" namespace has status "Ready":"True"
	I0827 23:03:17.171626 1740482 pod_ready.go:82] duration metric: took 401.682834ms for pod "kube-scheduler-addons-726754" in "kube-system" namespace to be "Ready" ...
	I0827 23:03:17.171651 1740482 pod_ready.go:39] duration metric: took 25.11730722s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:03:17.171678 1740482 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:03:17.171792 1740482 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:03:17.196451 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:17.199083 1740482 api_server.go:72] duration metric: took 28.167826495s to wait for apiserver process to appear ...
	I0827 23:03:17.199155 1740482 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:03:17.199192 1740482 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0827 23:03:17.208339 1740482 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0827 23:03:17.209489 1740482 api_server.go:141] control plane version: v1.31.0
	I0827 23:03:17.209560 1740482 api_server.go:131] duration metric: took 10.383712ms to wait for apiserver health ...
	I0827 23:03:17.209583 1740482 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:03:17.381378 1740482 system_pods.go:59] 18 kube-system pods found
	I0827 23:03:17.381459 1740482 system_pods.go:61] "coredns-6f6b679f8f-fd2n8" [400d6a78-21c9-4db2-86c4-b3bbfae9dd22] Running
	I0827 23:03:17.381484 1740482 system_pods.go:61] "csi-hostpath-attacher-0" [e7db069e-9746-4dab-a72a-83196f9b77de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0827 23:03:17.381508 1740482 system_pods.go:61] "csi-hostpath-resizer-0" [cb02f8e3-50ce-4453-9e13-8c268f0f8981] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0827 23:03:17.381547 1740482 system_pods.go:61] "csi-hostpathplugin-c28vf" [f7a28b36-7024-497b-8bf1-831e8bbdbdb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0827 23:03:17.381569 1740482 system_pods.go:61] "etcd-addons-726754" [943a3c44-130c-4253-a264-fa392915e4c1] Running
	I0827 23:03:17.381588 1740482 system_pods.go:61] "kindnet-skmvw" [95af43bc-5a22-4726-b71b-21c99cd592b0] Running
	I0827 23:03:17.381606 1740482 system_pods.go:61] "kube-apiserver-addons-726754" [c0ffad5a-b37f-4f4b-93bb-cbdd9dde272d] Running
	I0827 23:03:17.381637 1740482 system_pods.go:61] "kube-controller-manager-addons-726754" [0919c849-a4b4-4738-b4f6-9099fa9329cc] Running
	I0827 23:03:17.381658 1740482 system_pods.go:61] "kube-ingress-dns-minikube" [55708f08-d80b-4b41-97d2-f5be9ff5eca1] Running
	I0827 23:03:17.381676 1740482 system_pods.go:61] "kube-proxy-bb76v" [5bc8508f-a216-4ffe-be15-e108c2b52a93] Running
	I0827 23:03:17.381693 1740482 system_pods.go:61] "kube-scheduler-addons-726754" [c7dc6fcf-6b09-4c91-b35a-ca289a961bbd] Running
	I0827 23:03:17.381724 1740482 system_pods.go:61] "metrics-server-8988944d9-9zq2h" [68f66677-d6fe-47b3-bcbe-1d75b9c9a49f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0827 23:03:17.381747 1740482 system_pods.go:61] "nvidia-device-plugin-daemonset-j96qf" [470c2134-9d9e-48ff-89d0-bf973f12a637] Running
	I0827 23:03:17.381768 1740482 system_pods.go:61] "registry-6fb4cdfc84-97j86" [007aa8f4-d9b8-4f63-a5b1-8327bb01249d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0827 23:03:17.381788 1740482 system_pods.go:61] "registry-proxy-dk9mv" [2e38d268-dcf1-4880-8d79-499746cf6bfe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0827 23:03:17.381823 1740482 system_pods.go:61] "snapshot-controller-56fcc65765-bth5s" [8b073fe3-b812-4b31-8233-f9a3766dbebb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0827 23:03:17.381846 1740482 system_pods.go:61] "snapshot-controller-56fcc65765-c6mmv" [625a5a99-0148-46b3-b7a8-6667e4f6678b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0827 23:03:17.381863 1740482 system_pods.go:61] "storage-provisioner" [f16df684-4726-42ae-ba5b-c51c5cb24c86] Running
	I0827 23:03:17.381883 1740482 system_pods.go:74] duration metric: took 172.282243ms to wait for pod list to return data ...
	I0827 23:03:17.381903 1740482 default_sa.go:34] waiting for default service account to be created ...
	I0827 23:03:17.557217 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:17.558143 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:17.569904 1740482 default_sa.go:45] found service account: "default"
	I0827 23:03:17.569936 1740482 default_sa.go:55] duration metric: took 188.007427ms for default service account to be created ...
	I0827 23:03:17.569946 1740482 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 23:03:17.694584 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:17.777582 1740482 system_pods.go:86] 18 kube-system pods found
	I0827 23:03:17.777618 1740482 system_pods.go:89] "coredns-6f6b679f8f-fd2n8" [400d6a78-21c9-4db2-86c4-b3bbfae9dd22] Running
	I0827 23:03:17.777630 1740482 system_pods.go:89] "csi-hostpath-attacher-0" [e7db069e-9746-4dab-a72a-83196f9b77de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0827 23:03:17.777639 1740482 system_pods.go:89] "csi-hostpath-resizer-0" [cb02f8e3-50ce-4453-9e13-8c268f0f8981] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0827 23:03:17.777647 1740482 system_pods.go:89] "csi-hostpathplugin-c28vf" [f7a28b36-7024-497b-8bf1-831e8bbdbdb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0827 23:03:17.777652 1740482 system_pods.go:89] "etcd-addons-726754" [943a3c44-130c-4253-a264-fa392915e4c1] Running
	I0827 23:03:17.777658 1740482 system_pods.go:89] "kindnet-skmvw" [95af43bc-5a22-4726-b71b-21c99cd592b0] Running
	I0827 23:03:17.777664 1740482 system_pods.go:89] "kube-apiserver-addons-726754" [c0ffad5a-b37f-4f4b-93bb-cbdd9dde272d] Running
	I0827 23:03:17.777670 1740482 system_pods.go:89] "kube-controller-manager-addons-726754" [0919c849-a4b4-4738-b4f6-9099fa9329cc] Running
	I0827 23:03:17.777676 1740482 system_pods.go:89] "kube-ingress-dns-minikube" [55708f08-d80b-4b41-97d2-f5be9ff5eca1] Running
	I0827 23:03:17.777681 1740482 system_pods.go:89] "kube-proxy-bb76v" [5bc8508f-a216-4ffe-be15-e108c2b52a93] Running
	I0827 23:03:17.777695 1740482 system_pods.go:89] "kube-scheduler-addons-726754" [c7dc6fcf-6b09-4c91-b35a-ca289a961bbd] Running
	I0827 23:03:17.777702 1740482 system_pods.go:89] "metrics-server-8988944d9-9zq2h" [68f66677-d6fe-47b3-bcbe-1d75b9c9a49f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0827 23:03:17.777709 1740482 system_pods.go:89] "nvidia-device-plugin-daemonset-j96qf" [470c2134-9d9e-48ff-89d0-bf973f12a637] Running
	I0827 23:03:17.777716 1740482 system_pods.go:89] "registry-6fb4cdfc84-97j86" [007aa8f4-d9b8-4f63-a5b1-8327bb01249d] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0827 23:03:17.777724 1740482 system_pods.go:89] "registry-proxy-dk9mv" [2e38d268-dcf1-4880-8d79-499746cf6bfe] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0827 23:03:17.777731 1740482 system_pods.go:89] "snapshot-controller-56fcc65765-bth5s" [8b073fe3-b812-4b31-8233-f9a3766dbebb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0827 23:03:17.777738 1740482 system_pods.go:89] "snapshot-controller-56fcc65765-c6mmv" [625a5a99-0148-46b3-b7a8-6667e4f6678b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0827 23:03:17.777745 1740482 system_pods.go:89] "storage-provisioner" [f16df684-4726-42ae-ba5b-c51c5cb24c86] Running
	I0827 23:03:17.777753 1740482 system_pods.go:126] duration metric: took 207.80162ms to wait for k8s-apps to be running ...
	I0827 23:03:17.777760 1740482 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 23:03:17.777821 1740482 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:03:17.796460 1740482 system_svc.go:56] duration metric: took 18.689346ms WaitForService to wait for kubelet
	I0827 23:03:17.796489 1740482 kubeadm.go:582] duration metric: took 28.765238945s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:03:17.796510 1740482 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:03:17.970723 1740482 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0827 23:03:17.970755 1740482 node_conditions.go:123] node cpu capacity is 2
	I0827 23:03:17.970769 1740482 node_conditions.go:105] duration metric: took 174.253983ms to run NodePressure ...
	I0827 23:03:17.970783 1740482 start.go:241] waiting for startup goroutines ...
	I0827 23:03:18.058919 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:18.060459 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:18.202098 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:18.558061 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:18.558877 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:18.709921 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:19.058609 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:19.059557 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:19.195241 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:19.556467 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:19.557764 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:19.695306 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:20.057872 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:20.059794 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:20.198263 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:20.557699 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:20.558229 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:20.693956 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:21.069831 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:21.071402 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:21.195823 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:21.560009 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:21.563061 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:21.697734 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:22.059182 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:22.062002 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:22.194752 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:22.558326 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:22.558294 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:22.694668 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:23.059718 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:23.060761 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:23.197631 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:23.557076 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:23.557580 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:23.695888 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:24.057328 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:24.057979 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:24.194339 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:24.557422 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:24.558659 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:24.694462 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:25.056866 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:25.058158 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:25.196245 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:25.557211 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:25.558127 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:25.695305 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:26.057772 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:26.058167 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:26.194818 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:26.564018 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:26.565716 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:26.694600 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:27.057310 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:27.058290 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:27.194231 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:27.557915 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:27.558159 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:27.703532 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:28.058167 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:28.060492 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:28.195236 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:28.568958 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:28.581645 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:28.700289 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:29.058951 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:29.060212 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:29.195647 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:29.561137 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:29.562841 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:29.702745 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:30.059369 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:30.059700 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:30.200390 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:30.563116 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:30.563835 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:30.695669 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:31.058021 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:31.059322 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:31.194786 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:31.558732 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:31.559413 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:31.695187 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:32.057649 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:32.058575 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:32.195336 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:32.556954 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:32.558893 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:32.694865 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:33.057481 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:33.058482 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:33.195364 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:33.557205 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0827 23:03:33.558125 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:33.694536 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:34.057312 1740482 kapi.go:107] duration metric: took 34.505236577s to wait for kubernetes.io/minikube-addons=registry ...
	I0827 23:03:34.058305 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:34.194222 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:34.557184 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:34.695514 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:35.061344 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:35.195646 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:35.557636 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:35.694407 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:36.060033 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:36.195959 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:36.556545 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:36.694874 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:37.062739 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:37.195882 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:37.557440 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:37.698594 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:38.058118 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:38.194832 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:38.557797 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:38.697040 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:39.056683 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:39.194175 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:39.557538 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:39.694404 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:40.076945 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:40.195416 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:40.557614 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:40.695206 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:41.056291 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:41.199141 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:41.557222 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:41.693947 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:42.057793 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:42.195692 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:42.556348 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:42.694346 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:43.058129 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:43.195126 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:43.557999 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:43.696024 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:44.057734 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:44.194593 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:44.557262 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:44.693673 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:45.074258 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:45.195570 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:45.556804 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:45.695197 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:46.056590 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:46.193851 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:46.557712 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:46.694669 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:47.056574 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:47.194602 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:47.557816 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:47.694394 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:48.057099 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:48.194835 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:48.558544 1740482 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0827 23:03:48.702199 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:49.062476 1740482 kapi.go:107] duration metric: took 49.510403996s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0827 23:03:49.198150 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:49.708900 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:50.195279 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:50.695157 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:51.196636 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:51.696650 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:52.194931 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:52.694765 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:53.194979 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:53.694392 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:54.194977 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:54.695234 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:55.195308 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:55.694708 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:56.194489 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:56.694700 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0827 23:03:57.194696 1740482 kapi.go:107] duration metric: took 56.505322975s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0827 23:04:25.345691 1740482 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0827 23:04:25.345716 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:25.845841 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:26.345599 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:26.845384 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:27.345459 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:27.845464 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:28.345594 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:28.844974 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:29.345422 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:29.846363 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:30.345403 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:30.844752 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:31.345109 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:31.845136 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:32.344977 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:32.845599 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:33.344755 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:33.845193 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:34.344631 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:34.845559 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:35.346018 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:35.845534 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:36.345581 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:36.846470 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:37.345352 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:37.847200 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:38.344854 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:38.845703 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:39.345647 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:39.846173 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:40.345391 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:40.845274 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:41.345345 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:41.846099 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:42.347404 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:42.845307 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:43.345941 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:43.845872 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:44.345750 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:44.845529 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:45.346994 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:45.845644 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:46.345471 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:46.845768 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:47.345929 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:47.845879 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:48.345101 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:48.845626 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:49.344960 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:49.846354 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:50.345966 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:50.845045 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:51.345567 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:51.846001 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:52.346019 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:52.845571 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:53.344936 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:53.846411 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:54.345159 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:54.845557 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:55.344858 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:55.845972 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:56.345191 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:56.845950 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:57.346159 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:57.845525 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:58.345301 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:58.844950 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:59.345984 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:04:59.846118 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:00.351286 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:00.845566 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:01.345764 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:01.846416 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:02.346090 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:02.845269 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:03.346479 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:03.845244 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:04.345217 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:04.844538 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:05.346163 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:05.845831 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:06.345507 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:06.846156 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:07.345096 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:07.845926 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:08.346142 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:08.845637 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:09.344922 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:09.845371 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:10.345432 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:10.846397 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:11.344823 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:11.845761 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:12.346162 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:12.845184 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:13.345286 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:13.845499 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:14.345616 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:14.845591 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:15.345706 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:15.845832 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:16.345445 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:16.846733 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:17.345951 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:17.845672 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:18.345291 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:18.844930 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:19.345406 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:19.852567 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:20.345982 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:20.846035 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:21.346405 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:21.848942 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:22.346191 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:22.845194 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:23.345508 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:23.844781 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:24.345856 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:24.845920 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:25.345582 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:25.845690 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:26.345130 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:26.845204 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:27.345098 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:27.846155 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:28.344892 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:28.845247 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:29.346126 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:29.844689 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:30.346073 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:30.845565 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:31.344963 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:31.847373 1740482 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0827 23:05:32.345374 1740482 kapi.go:107] duration metric: took 2m30.503735655s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0827 23:05:32.347019 1740482 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-726754 cluster.
	I0827 23:05:32.349304 1740482 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0827 23:05:32.351335 1740482 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0827 23:05:32.353504 1740482 out.go:177] * Enabled addons: volcano, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0827 23:05:32.355249 1740482 addons.go:510] duration metric: took 2m43.32354382s for enable addons: enabled=[volcano nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0827 23:05:32.355306 1740482 start.go:246] waiting for cluster config update ...
	I0827 23:05:32.355339 1740482 start.go:255] writing updated cluster config ...
	I0827 23:05:32.355691 1740482 ssh_runner.go:195] Run: rm -f paused
	I0827 23:05:32.693868 1740482 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0827 23:05:32.696044 1740482 out.go:177] * Done! kubectl is now configured to use "addons-726754" cluster and "default" namespace by default
	
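	The gcp-auth hints above can be acted on directly: a minimal pod manifest carrying the `gcp-auth-skip-secret` label might look like the sketch below. The pod name, image, and the label value "true" are assumptions for illustration only and are not taken from this run.
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds              # hypothetical name, for illustration
	      labels:
	        gcp-auth-skip-secret: "true"  # label key quoted from the message above; value assumed
	    spec:
	      containers:
	      - name: app
	        image: nginx                  # placeholder image
	        # With this label present, the gcp-auth webhook should leave the pod
	        # unmodified instead of mounting the GCP credential secret.
	
	For pods that were already running before the addon finished, the message above suggests either recreating them or re-running the addon with the --refresh flag, e.g. `minikube addons enable gcp-auth --refresh`.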
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	5ee82f965620c       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   445bae0c40b70       gadget-b9kcp
	7090acf9a6e58       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   6f9766d3ed3ca       gcp-auth-89d5ffd79-wpbz7
	b1f2b62e732cc       8b46b1cd48760       4 minutes ago       Running             admission                                0                   0f3c31057c37e       volcano-admission-77d7d48b68-wjb2c
	7033d9b25f3ae       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   1e7033a883b90       csi-hostpathplugin-c28vf
	5f6670fcfc593       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   1e7033a883b90       csi-hostpathplugin-c28vf
	cdf2510c91c39       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   1e7033a883b90       csi-hostpathplugin-c28vf
	05bf03fd22b82       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   1e7033a883b90       csi-hostpathplugin-c28vf
	44e0c842ead4e       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   1e7033a883b90       csi-hostpathplugin-c28vf
	c60743375e6f4       289a818c8d9c5       5 minutes ago       Running             controller                               0                   d474e1d1c9081       ingress-nginx-controller-bc57996ff-dp2pz
	31b08ac336c0c       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   1e7033a883b90       csi-hostpathplugin-c28vf
	040b14028d8b8       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   0c09500244314       volcano-scheduler-576bc46687-dnf49
	db81f5bd8a5a1       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   4a03e408522c4       csi-hostpath-attacher-0
	8c12136263b60       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   5901a70dc4376       csi-hostpath-resizer-0
	b2d6229ec15eb       420193b27261a       5 minutes ago       Exited              patch                                    0                   3cb6170eaeeb1       ingress-nginx-admission-patch-jj44v
	c8a1c3768a8ac       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   3d13cfe91d859       volcano-controllers-56675bb4d5-qq9rb
	6eb8e3898f9ba       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   4f5edd525458c       registry-proxy-dk9mv
	cc5c8d28f0ba4       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   741bc4ccc9f79       metrics-server-8988944d9-9zq2h
	3b459d46f62b6       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   c98aebce9f5e8       snapshot-controller-56fcc65765-bth5s
	a1ff5da1bc071       6fed88f43b276       5 minutes ago       Running             registry                                 0                   f539b33a149a1       registry-6fb4cdfc84-97j86
	5c165b2e4f347       420193b27261a       5 minutes ago       Exited              create                                   0                   764d9bb8b054c       ingress-nginx-admission-create-xwvsx
	6a84843970522       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   e28559ed2e0db       snapshot-controller-56fcc65765-c6mmv
	cd505f81e9e50       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   3e35847cb1b07       local-path-provisioner-86d989889c-q9mbm
	7bd054feb119a       77bdba588b953       5 minutes ago       Running             yakd                                     0                   7f6e32b8817d0       yakd-dashboard-67d98fc6b-x8759
	4ba1fd01a9a65       2437cf7621777       5 minutes ago       Running             coredns                                  0                   f07e355bb1124       coredns-6f6b679f8f-fd2n8
	0e4cedd4866a0       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   5c3b346834461       cloud-spanner-emulator-769b77f747-vspvs
	b77eedbb800ae       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   1f07f44af3ffd       nvidia-device-plugin-daemonset-j96qf
	3f0db1d38b863       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   48cfa4c72a503       kube-ingress-dns-minikube
	2567772b4d1b2       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   0b79e18f831f0       storage-provisioner
	3699aee6cec94       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   817cbcc124054       kindnet-skmvw
	ea7a6397986cb       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   339c1406e9f4b       kube-proxy-bb76v
	46a124943f4b6       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   6498b2702ca6b       kube-controller-manager-addons-726754
	b928a40e5a951       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   a0368bca91fb3       kube-apiserver-addons-726754
	cbbae936eec1c       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   b2f429f6883fc       kube-scheduler-addons-726754
	754d83488f9d8       27e3830e14027       6 minutes ago       Running             etcd                                     0                   b3aa70dedd8b3       etcd-addons-726754
	
	
	==> containerd <==
	Aug 27 23:06:29 addons-726754 containerd[816]: time="2024-08-27T23:06:29.190054895Z" level=info msg="CreateContainer within sandbox \"445bae0c40b70cc1c4a13e81e90bf7ca139d60e725f380f8e8da2378b96e2b8f\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 27 23:06:29 addons-726754 containerd[816]: time="2024-08-27T23:06:29.214579267Z" level=info msg="CreateContainer within sandbox \"445bae0c40b70cc1c4a13e81e90bf7ca139d60e725f380f8e8da2378b96e2b8f\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\""
	Aug 27 23:06:29 addons-726754 containerd[816]: time="2024-08-27T23:06:29.216064052Z" level=info msg="StartContainer for \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\""
	Aug 27 23:06:29 addons-726754 containerd[816]: time="2024-08-27T23:06:29.278746276Z" level=info msg="StartContainer for \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\" returns successfully"
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.560258203Z" level=info msg="shim disconnected" id=5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b namespace=k8s.io
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.560321595Z" level=warning msg="cleaning up after shim disconnected" id=5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b namespace=k8s.io
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.560332515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.835213732Z" level=error msg="ExecSync for \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.835339046Z" level=error msg="ExecSync for \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.836217279Z" level=error msg="ExecSync for \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.836236069Z" level=error msg="ExecSync for \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.837126249Z" level=error msg="ExecSync for \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 27 23:06:30 addons-726754 containerd[816]: time="2024-08-27T23:06:30.837141149Z" level=error msg="ExecSync for \"5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 27 23:06:31 addons-726754 containerd[816]: time="2024-08-27T23:06:31.225799961Z" level=info msg="RemoveContainer for \"4e428c82f591790c8e306eb95f92ae3cb086af2b3995769d7e23e01b22adabf4\""
	Aug 27 23:06:31 addons-726754 containerd[816]: time="2024-08-27T23:06:31.234186636Z" level=info msg="RemoveContainer for \"4e428c82f591790c8e306eb95f92ae3cb086af2b3995769d7e23e01b22adabf4\" returns successfully"
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.152176726Z" level=info msg="RemoveContainer for \"d945a6093af8f0f890698e04a958e5be212dd4a5bcf0d9327a63596555e4ee1f\""
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.158359847Z" level=info msg="RemoveContainer for \"d945a6093af8f0f890698e04a958e5be212dd4a5bcf0d9327a63596555e4ee1f\" returns successfully"
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.160596525Z" level=info msg="StopPodSandbox for \"b05a79cb2446413fb429a245a0fbd2a83051c7b4e462e6d92cebf2700209db09\""
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.168757860Z" level=info msg="TearDown network for sandbox \"b05a79cb2446413fb429a245a0fbd2a83051c7b4e462e6d92cebf2700209db09\" successfully"
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.168800329Z" level=info msg="StopPodSandbox for \"b05a79cb2446413fb429a245a0fbd2a83051c7b4e462e6d92cebf2700209db09\" returns successfully"
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.169308166Z" level=info msg="RemovePodSandbox for \"b05a79cb2446413fb429a245a0fbd2a83051c7b4e462e6d92cebf2700209db09\""
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.169350865Z" level=info msg="Forcibly stopping sandbox \"b05a79cb2446413fb429a245a0fbd2a83051c7b4e462e6d92cebf2700209db09\""
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.194404407Z" level=info msg="TearDown network for sandbox \"b05a79cb2446413fb429a245a0fbd2a83051c7b4e462e6d92cebf2700209db09\" successfully"
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.200980873Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b05a79cb2446413fb429a245a0fbd2a83051c7b4e462e6d92cebf2700209db09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 27 23:06:44 addons-726754 containerd[816]: time="2024-08-27T23:06:44.201105433Z" level=info msg="RemovePodSandbox \"b05a79cb2446413fb429a245a0fbd2a83051c7b4e462e6d92cebf2700209db09\" returns successfully"
	
	
	==> coredns [4ba1fd01a9a6542fb8a7ef9376cb926b68f4a10e5950ae1fcf0333712b8aae16] <==
	[INFO] 10.244.0.8:53634 - 1007 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000424541s
	[INFO] 10.244.0.8:51272 - 48976 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002526666s
	[INFO] 10.244.0.8:51272 - 30806 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005223847s
	[INFO] 10.244.0.8:58713 - 45014 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00016199s
	[INFO] 10.244.0.8:58713 - 60379 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010289s
	[INFO] 10.244.0.8:39659 - 9349 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000129934s
	[INFO] 10.244.0.8:39659 - 39554 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000194293s
	[INFO] 10.244.0.8:55384 - 62203 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000087719s
	[INFO] 10.244.0.8:55384 - 21756 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060175s
	[INFO] 10.244.0.8:34319 - 13435 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000208529s
	[INFO] 10.244.0.8:34319 - 20861 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0001326s
	[INFO] 10.244.0.8:46872 - 19571 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003147453s
	[INFO] 10.244.0.8:46872 - 19313 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003738972s
	[INFO] 10.244.0.8:49428 - 28280 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000077192s
	[INFO] 10.244.0.8:49428 - 5239 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000057829s
	[INFO] 10.244.0.24:35022 - 3053 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00296893s
	[INFO] 10.244.0.24:60348 - 58928 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002719433s
	[INFO] 10.244.0.24:48706 - 60817 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000121417s
	[INFO] 10.244.0.24:38613 - 60266 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000110947s
	[INFO] 10.244.0.24:42971 - 29272 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127751s
	[INFO] 10.244.0.24:60275 - 6748 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096892s
	[INFO] 10.244.0.24:39646 - 5287 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002093526s
	[INFO] 10.244.0.24:55583 - 54946 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001778637s
	[INFO] 10.244.0.24:42458 - 65492 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001757812s
	[INFO] 10.244.0.24:36002 - 42481 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001814623s
	
	
	==> describe nodes <==
	Name:               addons-726754
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-726754
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=addons-726754
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T23_02_45_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-726754
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-726754"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 23:02:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-726754
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 23:08:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 23:05:49 +0000   Tue, 27 Aug 2024 23:02:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 23:05:49 +0000   Tue, 27 Aug 2024 23:02:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 23:05:49 +0000   Tue, 27 Aug 2024 23:02:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 23:05:49 +0000   Tue, 27 Aug 2024 23:02:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-726754
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 1cbeafe1fae64eda88c33188e162ec94
	  System UUID:                444e8bd8-5628-4699-9868-c660fde97167
	  Boot ID:                    e72ce5f2-4965-4285-9cc6-e362a4469d8a
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-vspvs     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-b9kcp                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-wpbz7                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-dp2pz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-6f6b679f8f-fd2n8                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpathplugin-c28vf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-726754                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-skmvw                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-726754                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-726754       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-bb76v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-726754                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-8988944d9-9zq2h              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-j96qf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-6fb4cdfc84-97j86                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-proxy-dk9mv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 snapshot-controller-56fcc65765-bth5s        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-c6mmv        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-q9mbm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-wjb2c          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-controllers-56675bb4d5-qq9rb        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-dnf49          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-x8759              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 6m1s  kube-proxy       
	  Normal   Starting                 6m7s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s  kubelet          Node addons-726754 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s  kubelet          Node addons-726754 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s  kubelet          Node addons-726754 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s  node-controller  Node addons-726754 event: Registered Node addons-726754 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [754d83488f9d8eea906dd6d6db28d52b7a72366086cf5bc1512c25056623e83b] <==
	{"level":"info","ts":"2024-08-27T23:02:37.701053Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-27T23:02:37.701210Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-27T23:02:37.701222Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-27T23:02:37.702485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-08-27T23:02:37.702584Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-27T23:02:38.072419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-27T23:02:38.072581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-27T23:02:38.072625Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-27T23:02:38.072678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-27T23:02:38.072715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-27T23:02:38.072760Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-27T23:02:38.072799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-27T23:02:38.076514Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:02:38.083179Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-726754 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-27T23:02:38.083461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:02:38.083867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-27T23:02:38.084054Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-27T23:02:38.084110Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-27T23:02:38.084911Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:02:38.086095Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-27T23:02:38.093039Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-27T23:02:38.083422Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:02:38.094432Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:02:38.094497Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-27T23:02:38.094185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [7090acf9a6e581a116b8be006d6a33d743be69d2e9118084e44b222754178ad1] <==
	2024/08/27 23:05:31 GCP Auth Webhook started!
	2024/08/27 23:05:49 Ready to marshal response ...
	2024/08/27 23:05:49 Ready to write response ...
	2024/08/27 23:05:50 Ready to marshal response ...
	2024/08/27 23:05:50 Ready to write response ...
	
	
	==> kernel <==
	 23:08:51 up  6:51,  0 users,  load average: 0.21, 1.19, 2.10
	Linux addons-726754 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3699aee6cec94099b7291a755721d7302609da740d9295062f291959b757b448] <==
	I0827 23:06:42.232535       1 main.go:299] handling current node
	I0827 23:06:52.229547       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:06:52.229587       1 main.go:299] handling current node
	I0827 23:07:02.236456       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:07:02.236493       1 main.go:299] handling current node
	I0827 23:07:12.237514       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:07:12.237553       1 main.go:299] handling current node
	I0827 23:07:22.229911       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:07:22.229948       1 main.go:299] handling current node
	I0827 23:07:32.236942       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:07:32.236980       1 main.go:299] handling current node
	I0827 23:07:42.229197       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:07:42.229326       1 main.go:299] handling current node
	I0827 23:07:52.229580       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:07:52.229615       1 main.go:299] handling current node
	I0827 23:08:02.231101       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:08:02.231217       1 main.go:299] handling current node
	I0827 23:08:12.236957       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:08:12.237059       1 main.go:299] handling current node
	I0827 23:08:22.235256       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:08:22.235295       1 main.go:299] handling current node
	I0827 23:08:32.232535       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:08:32.232575       1 main.go:299] handling current node
	I0827 23:08:42.238350       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0827 23:08:42.238612       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b928a40e5a95190ab4a0948363e67efb4961eff123db36de44b5ef95e76d880e] <==
	W0827 23:03:58.988130       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:00.030516       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:01.062422       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:02.163325       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:03.247308       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:04.289160       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:04.847025       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.221.70:443: connect: connection refused
	E0827 23:04:04.847071       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.221.70:443: connect: connection refused" logger="UnhandledError"
	W0827 23:04:04.848770       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:04.890787       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.221.70:443: connect: connection refused
	E0827 23:04:04.890825       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.221.70:443: connect: connection refused" logger="UnhandledError"
	W0827 23:04:04.892586       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:05.373592       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:06.388483       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:07.490971       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:08.497890       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:09.577851       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.3.221:443: connect: connection refused
	W0827 23:04:24.847468       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.221.70:443: connect: connection refused
	E0827 23:04:24.847506       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.221.70:443: connect: connection refused" logger="UnhandledError"
	W0827 23:05:04.857816       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.221.70:443: connect: connection refused
	E0827 23:05:04.857862       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.221.70:443: connect: connection refused" logger="UnhandledError"
	W0827 23:05:04.898356       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.221.70:443: connect: connection refused
	E0827 23:05:04.898401       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.221.70:443: connect: connection refused" logger="UnhandledError"
	I0827 23:05:49.286682       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0827 23:05:49.321386       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [46a124943f4b68360eccc4177e58478bd96c240b4209286dfaabda69572d7e3d] <==
	I0827 23:05:04.885427       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0827 23:05:04.885845       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0827 23:05:04.900611       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0827 23:05:04.911636       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:04.924763       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:04.925110       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:04.937149       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:05.947709       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:05.960854       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0827 23:05:07.074786       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0827 23:05:07.101043       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:08.081128       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0827 23:05:08.088234       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0827 23:05:08.095730       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0827 23:05:08.106180       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:08.114518       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:08.121100       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0827 23:05:32.058992       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.924946ms"
	I0827 23:05:32.059687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="55.334µs"
	I0827 23:05:38.039233       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0827 23:05:38.044333       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0827 23:05:38.108264       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0827 23:05:38.111752       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0827 23:05:48.972098       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0827 23:05:49.026601       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-726754"
	
	
	==> kube-proxy [ea7a6397986cb2310506cf278ff43d5b790abdbd0ecf1ce7957a147c98e4af3a] <==
	I0827 23:02:50.107877       1 server_linux.go:66] "Using iptables proxy"
	I0827 23:02:50.190135       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0827 23:02:50.190220       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0827 23:02:50.300613       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0827 23:02:50.310662       1 server_linux.go:169] "Using iptables Proxier"
	I0827 23:02:50.354286       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0827 23:02:50.354790       1 server.go:483] "Version info" version="v1.31.0"
	I0827 23:02:50.354816       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0827 23:02:50.397604       1 config.go:197] "Starting service config controller"
	I0827 23:02:50.397642       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0827 23:02:50.397664       1 config.go:104] "Starting endpoint slice config controller"
	I0827 23:02:50.397669       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0827 23:02:50.398259       1 config.go:326] "Starting node config controller"
	I0827 23:02:50.398267       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0827 23:02:50.497792       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0827 23:02:50.497835       1 shared_informer.go:320] Caches are synced for service config
	I0827 23:02:50.498362       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [cbbae936eec1c949ad78c48f9f897ea86e11992d4511b1701c6abd70506230a7] <==
	W0827 23:02:42.246138       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0827 23:02:42.246179       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.246464       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0827 23:02:42.246616       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.246807       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0827 23:02:42.246909       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.247610       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0827 23:02:42.248203       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.248232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0827 23:02:42.248581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.247809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0827 23:02:42.248773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.247969       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0827 23:02:42.248938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.248053       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0827 23:02:42.249040       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.248090       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0827 23:02:42.249134       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.248144       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0827 23:02:42.249229       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.248153       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0827 23:02:42.249323       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0827 23:02:42.248164       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 23:02:42.249415       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0827 23:02:43.331701       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 27 23:06:51 addons-726754 kubelet[1508]: E0827 23:06:51.063329    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:06:56 addons-726754 kubelet[1508]: I0827 23:06:56.063585    1508 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j96qf" secret="" err="secret \"gcp-auth\" not found"
	Aug 27 23:07:05 addons-726754 kubelet[1508]: I0827 23:07:05.062575    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:07:05 addons-726754 kubelet[1508]: E0827 23:07:05.062820    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:07:18 addons-726754 kubelet[1508]: I0827 23:07:18.062369    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:07:18 addons-726754 kubelet[1508]: E0827 23:07:18.062657    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:07:21 addons-726754 kubelet[1508]: I0827 23:07:21.062856    1508 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dk9mv" secret="" err="secret \"gcp-auth\" not found"
	Aug 27 23:07:23 addons-726754 kubelet[1508]: I0827 23:07:23.062695    1508 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-97j86" secret="" err="secret \"gcp-auth\" not found"
	Aug 27 23:07:31 addons-726754 kubelet[1508]: I0827 23:07:31.062759    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:07:31 addons-726754 kubelet[1508]: E0827 23:07:31.062991    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:07:44 addons-726754 kubelet[1508]: I0827 23:07:44.063404    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:07:44 addons-726754 kubelet[1508]: E0827 23:07:44.063695    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:07:55 addons-726754 kubelet[1508]: I0827 23:07:55.062919    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:07:55 addons-726754 kubelet[1508]: E0827 23:07:55.063350    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:08:10 addons-726754 kubelet[1508]: I0827 23:08:10.063483    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:08:10 addons-726754 kubelet[1508]: E0827 23:08:10.063674    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:08:21 addons-726754 kubelet[1508]: I0827 23:08:21.062777    1508 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-j96qf" secret="" err="secret \"gcp-auth\" not found"
	Aug 27 23:08:25 addons-726754 kubelet[1508]: I0827 23:08:25.062667    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:08:25 addons-726754 kubelet[1508]: E0827 23:08:25.062916    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:08:38 addons-726754 kubelet[1508]: I0827 23:08:38.062605    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:08:38 addons-726754 kubelet[1508]: E0827 23:08:38.064254    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	Aug 27 23:08:39 addons-726754 kubelet[1508]: I0827 23:08:39.063094    1508 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-97j86" secret="" err="secret \"gcp-auth\" not found"
	Aug 27 23:08:49 addons-726754 kubelet[1508]: I0827 23:08:49.062391    1508 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-dk9mv" secret="" err="secret \"gcp-auth\" not found"
	Aug 27 23:08:50 addons-726754 kubelet[1508]: I0827 23:08:50.068665    1508 scope.go:117] "RemoveContainer" containerID="5ee82f965620c61c98c7d0540da34c12e654ca9c2fc4d679140eea2f91c3332b"
	Aug 27 23:08:50 addons-726754 kubelet[1508]: E0827 23:08:50.070781    1508 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9kcp_gadget(7f17c02a-303b-4b68-bc92-c8cd0ea2c17d)\"" pod="gadget/gadget-b9kcp" podUID="7f17c02a-303b-4b68-bc92-c8cd0ea2c17d"
	
	
	==> storage-provisioner [2567772b4d1b2196e16828165eb846cbb51e49dce794114ff65157d08b7778a6] <==
	I0827 23:02:55.058965       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0827 23:02:55.085252       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0827 23:02:55.085383       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0827 23:02:55.108495       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0827 23:02:55.108708       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-726754_e1b3fc11-bc7d-42db-a5c5-f633d8ce05ae!
	I0827 23:02:55.109850       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"68bbf3a4-e11c-4df6-887c-9fe2170d53da", APIVersion:"v1", ResourceVersion:"536", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-726754_e1b3fc11-bc7d-42db-a5c5-f633d8ce05ae became leader
	I0827 23:02:55.210993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-726754_e1b3fc11-bc7d-42db-a5c5-f633d8ce05ae!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-726754 -n addons-726754
helpers_test.go:261: (dbg) Run:  kubectl --context addons-726754 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-xwvsx ingress-nginx-admission-patch-jj44v test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-726754 describe pod ingress-nginx-admission-create-xwvsx ingress-nginx-admission-patch-jj44v test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-726754 describe pod ingress-nginx-admission-create-xwvsx ingress-nginx-admission-patch-jj44v test-job-nginx-0: exit status 1 (85.761554ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-xwvsx" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-jj44v" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-726754 describe pod ingress-nginx-admission-create-xwvsx ingress-nginx-admission-patch-jj44v test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.13s)
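The Volcano failure above comes down to scheduling capacity: test-job-nginx-0 never leaves Pending because the scheduler reports "0/1 nodes are unavailable: 1 Insufficient cpu.", i.e. the job's CPU request does not fit in what is left on the single addons-726754 node. A minimal diagnostic sketch, not part of the test harness and assuming the vcjob is named test-job as the pod labels suggest, would be to compare the node's allocatable CPU with the job's request:

	kubectl --context addons-726754 describe node addons-726754 | grep -A 6 'Allocatable:'
	kubectl --context addons-726754 get vcjob test-job -n my-volcano -o jsonpath='{.spec.tasks[0].template.spec.containers[0].resources.requests}'

If the requested CPU exceeds the node's allocatable CPU minus what is already listed under "Allocated resources", the Unschedulable condition is expected for a minikube node of this size rather than a Volcano regression.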

TestStartStop/group/old-k8s-version/serial/SecondStart (374.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-394049 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0827 23:52:35.569285 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-394049 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m9.717898009s)

-- stdout --
	* [old-k8s-version-394049] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-394049" primary control-plane node in "old-k8s-version-394049" cluster
	* Pulling base image v0.0.44-1724667927-19511 ...
	* Restarting existing docker container for "old-k8s-version-394049" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-394049 addons enable metrics-server
	
	* Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	
	

-- /stdout --
** stderr ** 
	I0827 23:52:02.736896 1945499 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:52:02.737201 1945499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:52:02.737231 1945499 out.go:358] Setting ErrFile to fd 2...
	I0827 23:52:02.737253 1945499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:52:02.737642 1945499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:52:02.738253 1945499 out.go:352] Setting JSON to false
	I0827 23:52:02.739604 1945499 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":27272,"bootTime":1724775451,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0827 23:52:02.739741 1945499 start.go:139] virtualization:  
	I0827 23:52:02.743561 1945499 out.go:177] * [old-k8s-version-394049] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 23:52:02.747038 1945499 notify.go:220] Checking for updates...
	I0827 23:52:02.749669 1945499 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:52:02.751917 1945499 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:52:02.754183 1945499 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:52:02.756062 1945499 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	I0827 23:52:02.758574 1945499 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 23:52:02.760764 1945499 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:52:02.763588 1945499 config.go:182] Loaded profile config "old-k8s-version-394049": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0827 23:52:02.766539 1945499 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0827 23:52:02.768765 1945499 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:52:02.815002 1945499 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 23:52:02.815134 1945499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:52:02.905821 1945499 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-27 23:52:02.889656145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:52:02.905928 1945499 docker.go:307] overlay module found
	I0827 23:52:02.908034 1945499 out.go:177] * Using the docker driver based on existing profile
	I0827 23:52:02.910131 1945499 start.go:297] selected driver: docker
	I0827 23:52:02.910147 1945499 start.go:901] validating driver "docker" against &{Name:old-k8s-version-394049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-394049 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:52:02.910276 1945499 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:52:02.910874 1945499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:52:02.998135 1945499 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-27 23:52:02.985495161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:52:02.998536 1945499 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:52:02.998567 1945499 cni.go:84] Creating CNI manager for ""
	I0827 23:52:02.998576 1945499 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:52:02.998615 1945499 start.go:340] cluster config:
	{Name:old-k8s-version-394049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-394049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:52:03.000829 1945499 out.go:177] * Starting "old-k8s-version-394049" primary control-plane node in "old-k8s-version-394049" cluster
	I0827 23:52:03.004287 1945499 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0827 23:52:03.010214 1945499 out.go:177] * Pulling base image v0.0.44-1724667927-19511 ...
	I0827 23:52:03.015681 1945499 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0827 23:52:03.015779 1945499 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0827 23:52:03.015804 1945499 cache.go:56] Caching tarball of preloaded images
	I0827 23:52:03.015935 1945499 preload.go:172] Found /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0827 23:52:03.015946 1945499 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0827 23:52:03.016071 1945499 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/config.json ...
	I0827 23:52:03.016323 1945499 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local docker daemon
	W0827 23:52:03.037855 1945499 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 is of wrong architecture
	I0827 23:52:03.037876 1945499 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 23:52:03.037956 1945499 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory
	I0827 23:52:03.037975 1945499 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory, skipping pull
	I0827 23:52:03.037980 1945499 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 exists in cache, skipping pull
	I0827 23:52:03.037995 1945499 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 as a tarball
	I0827 23:52:03.038001 1945499 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 from local cache
	I0827 23:52:03.167756 1945499 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 from cached tarball
	I0827 23:52:03.167789 1945499 cache.go:194] Successfully downloaded all kic artifacts
	I0827 23:52:03.167832 1945499 start.go:360] acquireMachinesLock for old-k8s-version-394049: {Name:mke5f65788ddde90400b946cf7ce5cd43b90aa17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:52:03.167895 1945499 start.go:364] duration metric: took 42.617µs to acquireMachinesLock for "old-k8s-version-394049"
	I0827 23:52:03.167916 1945499 start.go:96] Skipping create...Using existing machine configuration
	I0827 23:52:03.167922 1945499 fix.go:54] fixHost starting: 
	I0827 23:52:03.168196 1945499 cli_runner.go:164] Run: docker container inspect old-k8s-version-394049 --format={{.State.Status}}
	I0827 23:52:03.184926 1945499 fix.go:112] recreateIfNeeded on old-k8s-version-394049: state=Stopped err=<nil>
	W0827 23:52:03.184955 1945499 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 23:52:03.187243 1945499 out.go:177] * Restarting existing docker container for "old-k8s-version-394049" ...
	I0827 23:52:03.189122 1945499 cli_runner.go:164] Run: docker start old-k8s-version-394049
	I0827 23:52:03.632139 1945499 cli_runner.go:164] Run: docker container inspect old-k8s-version-394049 --format={{.State.Status}}
	I0827 23:52:03.671957 1945499 kic.go:430] container "old-k8s-version-394049" state is running.
	I0827 23:52:03.672332 1945499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-394049
	I0827 23:52:03.709277 1945499 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/config.json ...
	I0827 23:52:03.709503 1945499 machine.go:93] provisionDockerMachine start ...
	I0827 23:52:03.709569 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:03.748704 1945499 main.go:141] libmachine: Using SSH client type: native
	I0827 23:52:03.748968 1945499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I0827 23:52:03.748977 1945499 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 23:52:03.750336 1945499 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54082->127.0.0.1:33829: read: connection reset by peer
	I0827 23:52:06.900136 1945499 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-394049
	
	I0827 23:52:06.900160 1945499 ubuntu.go:169] provisioning hostname "old-k8s-version-394049"
	I0827 23:52:06.900227 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:06.923849 1945499 main.go:141] libmachine: Using SSH client type: native
	I0827 23:52:06.924088 1945499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I0827 23:52:06.924104 1945499 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-394049 && echo "old-k8s-version-394049" | sudo tee /etc/hostname
	I0827 23:52:07.093110 1945499 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-394049
	
	I0827 23:52:07.093190 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:07.122461 1945499 main.go:141] libmachine: Using SSH client type: native
	I0827 23:52:07.122709 1945499 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I0827 23:52:07.122728 1945499 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-394049' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-394049/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-394049' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 23:52:07.276861 1945499 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 23:52:07.276928 1945499 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19522-1734325/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-1734325/.minikube}
	I0827 23:52:07.276961 1945499 ubuntu.go:177] setting up certificates
	I0827 23:52:07.277000 1945499 provision.go:84] configureAuth start
	I0827 23:52:07.277078 1945499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-394049
	I0827 23:52:07.306217 1945499 provision.go:143] copyHostCerts
	I0827 23:52:07.306296 1945499 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.pem, removing ...
	I0827 23:52:07.306311 1945499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.pem
	I0827 23:52:07.306389 1945499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.pem (1078 bytes)
	I0827 23:52:07.306502 1945499 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-1734325/.minikube/cert.pem, removing ...
	I0827 23:52:07.306513 1945499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-1734325/.minikube/cert.pem
	I0827 23:52:07.306546 1945499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/cert.pem (1123 bytes)
	I0827 23:52:07.306608 1945499 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-1734325/.minikube/key.pem, removing ...
	I0827 23:52:07.306618 1945499 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-1734325/.minikube/key.pem
	I0827 23:52:07.306642 1945499 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/key.pem (1675 bytes)
	I0827 23:52:07.306694 1945499 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-394049 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-394049]
	I0827 23:52:07.518250 1945499 provision.go:177] copyRemoteCerts
	I0827 23:52:07.518329 1945499 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 23:52:07.518411 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:07.547996 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:07.671308 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0827 23:52:07.700006 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0827 23:52:07.752879 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 23:52:07.789704 1945499 provision.go:87] duration metric: took 512.674477ms to configureAuth
	I0827 23:52:07.789750 1945499 ubuntu.go:193] setting minikube options for container-runtime
	I0827 23:52:07.789941 1945499 config.go:182] Loaded profile config "old-k8s-version-394049": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0827 23:52:07.789954 1945499 machine.go:96] duration metric: took 4.08044372s to provisionDockerMachine
	I0827 23:52:07.789962 1945499 start.go:293] postStartSetup for "old-k8s-version-394049" (driver="docker")
	I0827 23:52:07.789978 1945499 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 23:52:07.790027 1945499 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 23:52:07.790069 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:07.814862 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:07.930513 1945499 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 23:52:07.934416 1945499 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0827 23:52:07.934454 1945499 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0827 23:52:07.934469 1945499 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0827 23:52:07.934477 1945499 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0827 23:52:07.934491 1945499 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-1734325/.minikube/addons for local assets ...
	I0827 23:52:07.934550 1945499 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-1734325/.minikube/files for local assets ...
	I0827 23:52:07.934636 1945499 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/ssl/certs/17397152.pem -> 17397152.pem in /etc/ssl/certs
	I0827 23:52:07.934749 1945499 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 23:52:07.952801 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/ssl/certs/17397152.pem --> /etc/ssl/certs/17397152.pem (1708 bytes)
	I0827 23:52:07.993709 1945499 start.go:296] duration metric: took 203.730124ms for postStartSetup
	I0827 23:52:07.993880 1945499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:52:07.993953 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:08.031045 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:08.145431 1945499 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0827 23:52:08.152998 1945499 fix.go:56] duration metric: took 4.985066711s for fixHost
	I0827 23:52:08.153021 1945499 start.go:83] releasing machines lock for "old-k8s-version-394049", held for 4.985117745s
	I0827 23:52:08.153101 1945499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-394049
	I0827 23:52:08.179555 1945499 ssh_runner.go:195] Run: cat /version.json
	I0827 23:52:08.179621 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:08.179854 1945499 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 23:52:08.179927 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:08.207571 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:08.222877 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:08.332009 1945499 ssh_runner.go:195] Run: systemctl --version
	I0827 23:52:08.508502 1945499 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0827 23:52:08.513148 1945499 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0827 23:52:08.532169 1945499 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0827 23:52:08.532253 1945499 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 23:52:08.542357 1945499 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0827 23:52:08.542427 1945499 start.go:495] detecting cgroup driver to use...
	I0827 23:52:08.542475 1945499 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0827 23:52:08.542552 1945499 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0827 23:52:08.557872 1945499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 23:52:08.571573 1945499 docker.go:217] disabling cri-docker service (if available) ...
	I0827 23:52:08.571683 1945499 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 23:52:08.593593 1945499 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 23:52:08.610315 1945499 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 23:52:08.764292 1945499 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 23:52:08.918359 1945499 docker.go:233] disabling docker service ...
	I0827 23:52:08.918498 1945499 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 23:52:08.939273 1945499 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 23:52:08.960750 1945499 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 23:52:09.106158 1945499 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 23:52:09.250105 1945499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 23:52:09.270764 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 23:52:09.301903 1945499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0827 23:52:09.312872 1945499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0827 23:52:09.330687 1945499 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0827 23:52:09.330806 1945499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0827 23:52:09.347244 1945499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 23:52:09.361296 1945499 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0827 23:52:09.376803 1945499 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 23:52:09.389188 1945499 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 23:52:09.406637 1945499 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0827 23:52:09.425950 1945499 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 23:52:09.445029 1945499 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 23:52:09.454326 1945499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:52:09.631671 1945499 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0827 23:52:09.973856 1945499 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0827 23:52:09.974010 1945499 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0827 23:52:09.982282 1945499 start.go:563] Will wait 60s for crictl version
	I0827 23:52:09.982397 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:52:09.986737 1945499 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 23:52:10.075333 1945499 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0827 23:52:10.075449 1945499 ssh_runner.go:195] Run: containerd --version
	I0827 23:52:10.132355 1945499 ssh_runner.go:195] Run: containerd --version
	I0827 23:52:10.170730 1945499 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0827 23:52:10.173427 1945499 cli_runner.go:164] Run: docker network inspect old-k8s-version-394049 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0827 23:52:10.194150 1945499 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0827 23:52:10.198804 1945499 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:52:10.212607 1945499 kubeadm.go:883] updating cluster {Name:old-k8s-version-394049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-394049 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 23:52:10.212747 1945499 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0827 23:52:10.212869 1945499 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:52:10.288990 1945499 containerd.go:627] all images are preloaded for containerd runtime.
	I0827 23:52:10.289017 1945499 containerd.go:534] Images already preloaded, skipping extraction
	I0827 23:52:10.289078 1945499 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:52:10.373819 1945499 containerd.go:627] all images are preloaded for containerd runtime.
	I0827 23:52:10.373838 1945499 cache_images.go:84] Images are preloaded, skipping loading
	I0827 23:52:10.373846 1945499 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0827 23:52:10.373962 1945499 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-394049 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-394049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 23:52:10.374023 1945499 ssh_runner.go:195] Run: sudo crictl info
	I0827 23:52:10.449400 1945499 cni.go:84] Creating CNI manager for ""
	I0827 23:52:10.449481 1945499 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:52:10.449507 1945499 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 23:52:10.449560 1945499 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-394049 NodeName:old-k8s-version-394049 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0827 23:52:10.449728 1945499 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-394049"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 23:52:10.449840 1945499 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0827 23:52:10.466016 1945499 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 23:52:10.466165 1945499 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 23:52:10.484472 1945499 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0827 23:52:10.518961 1945499 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 23:52:10.544181 1945499 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0827 23:52:10.583035 1945499 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0827 23:52:10.592977 1945499 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:52:10.612025 1945499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:52:10.761920 1945499 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:52:10.791709 1945499 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049 for IP: 192.168.76.2
	I0827 23:52:10.791745 1945499 certs.go:194] generating shared ca certs ...
	I0827 23:52:10.791762 1945499 certs.go:226] acquiring lock for ca certs: {Name:mkd3d47e0a7419f9dbeb7a4e1a68db1090a3adb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:52:10.791940 1945499 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.key
	I0827 23:52:10.791999 1945499 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.key
	I0827 23:52:10.792011 1945499 certs.go:256] generating profile certs ...
	I0827 23:52:10.792110 1945499 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.key
	I0827 23:52:10.792190 1945499 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/apiserver.key.e729be15
	I0827 23:52:10.792240 1945499 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/proxy-client.key
	I0827 23:52:10.792405 1945499 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/1739715.pem (1338 bytes)
	W0827 23:52:10.792451 1945499 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/1739715_empty.pem, impossibly tiny 0 bytes
	I0827 23:52:10.792464 1945499 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 23:52:10.792490 1945499 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem (1078 bytes)
	I0827 23:52:10.792524 1945499 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem (1123 bytes)
	I0827 23:52:10.792553 1945499 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem (1675 bytes)
	I0827 23:52:10.792607 1945499 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/ssl/certs/17397152.pem (1708 bytes)
	I0827 23:52:10.793380 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 23:52:10.882145 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0827 23:52:10.944855 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 23:52:11.007094 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 23:52:11.070941 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0827 23:52:11.120551 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0827 23:52:11.170729 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 23:52:11.215091 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 23:52:11.242887 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/ssl/certs/17397152.pem --> /usr/share/ca-certificates/17397152.pem (1708 bytes)
	I0827 23:52:11.270999 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 23:52:11.301884 1945499 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/1739715.pem --> /usr/share/ca-certificates/1739715.pem (1338 bytes)
	I0827 23:52:11.340253 1945499 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 23:52:11.374921 1945499 ssh_runner.go:195] Run: openssl version
	I0827 23:52:11.382455 1945499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17397152.pem && ln -fs /usr/share/ca-certificates/17397152.pem /etc/ssl/certs/17397152.pem"
	I0827 23:52:11.403246 1945499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17397152.pem
	I0827 23:52:11.408487 1945499 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 23:12 /usr/share/ca-certificates/17397152.pem
	I0827 23:52:11.408667 1945499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17397152.pem
	I0827 23:52:11.416133 1945499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17397152.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 23:52:11.434438 1945499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 23:52:11.449666 1945499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:52:11.455877 1945499 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:52:11.455977 1945499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:52:11.470500 1945499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 23:52:11.486992 1945499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1739715.pem && ln -fs /usr/share/ca-certificates/1739715.pem /etc/ssl/certs/1739715.pem"
	I0827 23:52:11.499904 1945499 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1739715.pem
	I0827 23:52:11.504265 1945499 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 23:12 /usr/share/ca-certificates/1739715.pem
	I0827 23:52:11.504363 1945499 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1739715.pem
	I0827 23:52:11.512653 1945499 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1739715.pem /etc/ssl/certs/51391683.0"
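The openssl/ln sequence above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject-hash name (3ec20f2e.0, b5213941.0 and 51391683.0 in this run); the `openssl x509 -hash -noout` calls compute exactly that hash. A small sketch of the same idea follows; the helper name linkIntoTrustStore is hypothetical and the code simply shells out to openssl rather than reimplementing the hash.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // Sketch: compute the OpenSSL subject hash of a CA certificate and create
    // the hash-named symlink the system trust store expects, mirroring the
    // `openssl x509 -hash -noout` + `ln -fs` pairs in the log above
    // (e.g. 17397152.pem -> /etc/ssl/certs/3ec20f2e.0).
    func linkIntoTrustStore(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // ln -fs semantics: replace a stale link if present
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkIntoTrustStore("/usr/share/ca-certificates/17397152.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
    }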
	I0827 23:52:11.522846 1945499 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 23:52:11.527136 1945499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 23:52:11.534681 1945499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 23:52:11.542478 1945499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 23:52:11.550123 1945499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 23:52:11.557935 1945499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 23:52:11.565422 1945499 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
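The `-checkend 86400` probes above verify that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds) before the existing certificates are reused. An equivalent check in Go, as a sketch only; the certificate path is one of those from the log and is illustrative.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // Sketch of what `openssl x509 -noout -in <cert> -checkend 86400` asserts:
    // the certificate must not expire within the next 24 hours.
    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("not a PEM certificate")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h: regenerate")
        } else {
            fmt.Println("certificate still valid for at least 24h: reuse")
        }
    }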
	I0827 23:52:11.572892 1945499 kubeadm.go:392] StartCluster: {Name:old-k8s-version-394049 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-394049 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:52:11.573000 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0827 23:52:11.573068 1945499 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 23:52:11.624547 1945499 cri.go:89] found id: "3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62"
	I0827 23:52:11.624571 1945499 cri.go:89] found id: "575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c"
	I0827 23:52:11.624583 1945499 cri.go:89] found id: "592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c"
	I0827 23:52:11.624588 1945499 cri.go:89] found id: "afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205"
	I0827 23:52:11.624592 1945499 cri.go:89] found id: "ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9"
	I0827 23:52:11.624595 1945499 cri.go:89] found id: "cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808"
	I0827 23:52:11.624599 1945499 cri.go:89] found id: "8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3"
	I0827 23:52:11.624602 1945499 cri.go:89] found id: "30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618"
	I0827 23:52:11.624606 1945499 cri.go:89] found id: ""
	I0827 23:52:11.624664 1945499 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0827 23:52:11.638953 1945499 cri.go:116] JSON = null
	W0827 23:52:11.639018 1945499 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
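The "unpause failed" warning above comes from cross-checking two views of the runtime: crictl reports eight kube-system containers, while `runc ... list -f json` returns null, so there is nothing listed as paused to resume and the unpause step is skipped. Below is a hedged, simplified reproduction of that comparison; it is not minikube's cri package and it ignores the State:paused filter shown at cri.go:54.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    // Sketch: compare what crictl and runc report for kube-system containers,
    // the same cross-check behind the log's
    // "list returned 0 containers, but ps returned 8" warning.
    func main() {
        psOut, _ := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
            "--label", "io.kubernetes.pod.namespace=kube-system").Output()
        running := strings.Fields(string(psOut))

        listOut, _ := exec.Command("sudo", "runc", "--root",
            "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
        var listed []map[string]interface{}
        _ = json.Unmarshal(listOut, &listed) // the literal "null" decodes to an empty list

        if len(listed) == 0 && len(running) > 0 {
            fmt.Printf("unpause skipped: runc listed 0 containers, crictl ps returned %d\n", len(running))
        }
    }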
	I0827 23:52:11.639090 1945499 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 23:52:11.649546 1945499 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0827 23:52:11.649570 1945499 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0827 23:52:11.649630 1945499 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0827 23:52:11.659080 1945499 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0827 23:52:11.659606 1945499 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-394049" does not appear in /home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:52:11.659736 1945499 kubeconfig.go:62] /home/jenkins/minikube-integration/19522-1734325/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-394049" cluster setting kubeconfig missing "old-k8s-version-394049" context setting]
	I0827 23:52:11.660051 1945499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/kubeconfig: {Name:mkbc2349839e7e640d3be8c9c9dabdbaf532417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:52:11.661816 1945499 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0827 23:52:11.674768 1945499 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0827 23:52:11.674814 1945499 kubeadm.go:597] duration metric: took 25.237547ms to restartPrimaryControlPlane
	I0827 23:52:11.674824 1945499 kubeadm.go:394] duration metric: took 101.98033ms to StartCluster
	I0827 23:52:11.674840 1945499 settings.go:142] acquiring lock: {Name:mk2abdfb376a9e7540e648c96e5aaa1709f13213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:52:11.674903 1945499 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:52:11.675604 1945499 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/kubeconfig: {Name:mkbc2349839e7e640d3be8c9c9dabdbaf532417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:52:11.675846 1945499 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0827 23:52:11.676242 1945499 config.go:182] Loaded profile config "old-k8s-version-394049": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0827 23:52:11.676205 1945499 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 23:52:11.676350 1945499 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-394049"
	I0827 23:52:11.676400 1945499 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-394049"
	W0827 23:52:11.676411 1945499 addons.go:243] addon storage-provisioner should already be in state true
	I0827 23:52:11.676436 1945499 host.go:66] Checking if "old-k8s-version-394049" exists ...
	I0827 23:52:11.676440 1945499 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-394049"
	I0827 23:52:11.676475 1945499 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-394049"
	I0827 23:52:11.676805 1945499 cli_runner.go:164] Run: docker container inspect old-k8s-version-394049 --format={{.State.Status}}
	I0827 23:52:11.676983 1945499 cli_runner.go:164] Run: docker container inspect old-k8s-version-394049 --format={{.State.Status}}
	I0827 23:52:11.677434 1945499 addons.go:69] Setting dashboard=true in profile "old-k8s-version-394049"
	I0827 23:52:11.677470 1945499 addons.go:234] Setting addon dashboard=true in "old-k8s-version-394049"
	W0827 23:52:11.677483 1945499 addons.go:243] addon dashboard should already be in state true
	I0827 23:52:11.677516 1945499 host.go:66] Checking if "old-k8s-version-394049" exists ...
	I0827 23:52:11.678090 1945499 cli_runner.go:164] Run: docker container inspect old-k8s-version-394049 --format={{.State.Status}}
	I0827 23:52:11.681782 1945499 out.go:177] * Verifying Kubernetes components...
	I0827 23:52:11.682073 1945499 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-394049"
	I0827 23:52:11.682106 1945499 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-394049"
	W0827 23:52:11.682113 1945499 addons.go:243] addon metrics-server should already be in state true
	I0827 23:52:11.682144 1945499 host.go:66] Checking if "old-k8s-version-394049" exists ...
	I0827 23:52:11.682599 1945499 cli_runner.go:164] Run: docker container inspect old-k8s-version-394049 --format={{.State.Status}}
	I0827 23:52:11.684128 1945499 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:52:11.749657 1945499 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:52:11.755515 1945499 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:52:11.755540 1945499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 23:52:11.755608 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:11.756467 1945499 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-394049"
	W0827 23:52:11.756492 1945499 addons.go:243] addon default-storageclass should already be in state true
	I0827 23:52:11.756518 1945499 host.go:66] Checking if "old-k8s-version-394049" exists ...
	I0827 23:52:11.757188 1945499 cli_runner.go:164] Run: docker container inspect old-k8s-version-394049 --format={{.State.Status}}
	I0827 23:52:11.768877 1945499 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0827 23:52:11.771212 1945499 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0827 23:52:11.771237 1945499 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0827 23:52:11.771321 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:11.793690 1945499 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0827 23:52:11.803470 1945499 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0827 23:52:11.806971 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0827 23:52:11.807000 1945499 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0827 23:52:11.807091 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:11.818644 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:11.847776 1945499 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 23:52:11.847797 1945499 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 23:52:11.847860 1945499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-394049
	I0827 23:52:11.877908 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:11.898946 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:11.901219 1945499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/old-k8s-version-394049/id_rsa Username:docker}
	I0827 23:52:11.953651 1945499 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:52:11.973330 1945499 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-394049" to be "Ready" ...
	I0827 23:52:12.068719 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:52:12.167155 1945499 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0827 23:52:12.167224 1945499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0827 23:52:12.211855 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0827 23:52:12.211933 1945499 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0827 23:52:12.234788 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:52:12.250910 1945499 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0827 23:52:12.250990 1945499 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0827 23:52:12.289874 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0827 23:52:12.289948 1945499 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0827 23:52:12.323195 1945499 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 23:52:12.323273 1945499 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0827 23:52:12.360133 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0827 23:52:12.360212 1945499 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0827 23:52:12.399277 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 23:52:12.434447 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0827 23:52:12.434517 1945499 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0827 23:52:12.492905 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:12.492987 1945499 retry.go:31] will retry after 339.947745ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
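Each failed `kubectl apply` in this stretch goes through a generic retry helper (the retry.go:31 entries): the same command is re-run after a short, growing delay until the restarted API server starts answering on port 8443. A minimal sketch of such a retry-with-backoff loop follows; the function name, attempt count and delay schedule are illustrative, not minikube's exact implementation.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Sketch of a retry-with-backoff wrapper like the one producing the
    // "will retry after ..." lines: re-run the apply until the API server
    // on localhost:8443 accepts connections or the attempts run out.
    func applyWithRetry(manifest string) error {
        delay := 300 * time.Millisecond // illustrative starting delay
        for attempt := 1; attempt <= 10; attempt++ {
            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                "/var/lib/minikube/binaries/v1.20.0/kubectl", "apply", "--force", "-f", manifest)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            fmt.Printf("apply failed, will retry after %v: %s\n", delay, out)
            time.Sleep(delay)
            delay *= 2 // grow the wait between attempts
        }
        return fmt.Errorf("apply of %s did not succeed", manifest)
    }

    func main() {
        _ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml")
    }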
	W0827 23:52:12.517632 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:12.517714 1945499 retry.go:31] will retry after 214.072246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:12.531424 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0827 23:52:12.531511 1945499 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0827 23:52:12.638876 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0827 23:52:12.638955 1945499 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0827 23:52:12.664267 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:12.664348 1945499 retry.go:31] will retry after 180.385878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:12.680384 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0827 23:52:12.680456 1945499 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0827 23:52:12.718075 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0827 23:52:12.718152 1945499 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0827 23:52:12.732537 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:52:12.762556 1945499 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0827 23:52:12.762635 1945499 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0827 23:52:12.834036 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:52:12.835196 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0827 23:52:12.845399 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0827 23:52:12.952511 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:12.952594 1945499 retry.go:31] will retry after 258.295816ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:13.207864 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.207944 1945499 retry.go:31] will retry after 405.723693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.211348 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0827 23:52:13.245938 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.246022 1945499 retry.go:31] will retry after 371.526595ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:13.246077 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.246129 1945499 retry.go:31] will retry after 362.91713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:13.361730 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.361824 1945499 retry.go:31] will retry after 561.04576ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.609998 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 23:52:13.614441 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:52:13.617940 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0827 23:52:13.888738 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.888824 1945499 retry.go:31] will retry after 381.820515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:13.902025 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.902114 1945499 retry.go:31] will retry after 692.100352ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:13.923133 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.923227 1945499 retry.go:31] will retry after 528.298021ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:13.923354 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:52:13.974554 1945499 node_ready.go:53] error getting node "old-k8s-version-394049": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-394049": dial tcp 192.168.76.2:8443: connect: connection refused
	W0827 23:52:14.055808 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:14.055901 1945499 retry.go:31] will retry after 995.646912ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:14.271193 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0827 23:52:14.383025 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:14.383108 1945499 retry.go:31] will retry after 1.154942581s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:14.452213 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0827 23:52:14.569965 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:14.570010 1945499 retry.go:31] will retry after 549.99911ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:14.595318 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0827 23:52:14.703719 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:14.703782 1945499 retry.go:31] will retry after 468.596693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:15.052802 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:52:15.120380 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0827 23:52:15.172911 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0827 23:52:15.196771 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:15.196821 1945499 retry.go:31] will retry after 810.895713ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:15.305271 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:15.305313 1945499 retry.go:31] will retry after 905.254777ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:15.353566 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:15.353617 1945499 retry.go:31] will retry after 775.609276ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:15.538405 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0827 23:52:15.635746 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:15.635829 1945499 retry.go:31] will retry after 662.97292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:16.008589 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0827 23:52:16.125637 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:16.125673 1945499 retry.go:31] will retry after 2.412456098s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:16.129933 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:52:16.211301 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0827 23:52:16.229170 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:16.229208 1945499 retry.go:31] will retry after 2.114056106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:16.299508 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0827 23:52:16.362615 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:16.362660 1945499 retry.go:31] will retry after 1.074801879s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:16.458072 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:16.458106 1945499 retry.go:31] will retry after 2.100016875s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:16.474784 1945499 node_ready.go:53] error getting node "old-k8s-version-394049": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-394049": dial tcp 192.168.76.2:8443: connect: connection refused
	I0827 23:52:17.438515 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0827 23:52:17.547307 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:17.547338 1945499 retry.go:31] will retry after 2.011521984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:18.343977 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0827 23:52:18.423867 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:18.423901 1945499 retry.go:31] will retry after 3.473516436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:18.538607 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:52:18.558918 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0827 23:52:18.644605 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:18.644639 1945499 retry.go:31] will retry after 1.871618423s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0827 23:52:18.662701 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:18.662735 1945499 retry.go:31] will retry after 1.847809249s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:18.974543 1945499 node_ready.go:53] error getting node "old-k8s-version-394049": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-394049": dial tcp 192.168.76.2:8443: connect: connection refused
	I0827 23:52:19.559357 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0827 23:52:19.638731 1945499 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:19.638769 1945499 retry.go:31] will retry after 3.439371127s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0827 23:52:20.511669 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 23:52:20.517103 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:52:21.897782 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:52:23.078701 1945499 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0827 23:52:30.192687 1945499 node_ready.go:49] node "old-k8s-version-394049" has status "Ready":"True"
	I0827 23:52:30.192714 1945499 node_ready.go:38] duration metric: took 18.219345112s for node "old-k8s-version-394049" to be "Ready" ...
	I0827 23:52:30.192724 1945499 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:52:30.420629 1945499 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-fbhfc" in "kube-system" namespace to be "Ready" ...
	I0827 23:52:30.478056 1945499 pod_ready.go:93] pod "coredns-74ff55c5b-fbhfc" in "kube-system" namespace has status "Ready":"True"
	I0827 23:52:30.478124 1945499 pod_ready.go:82] duration metric: took 57.401877ms for pod "coredns-74ff55c5b-fbhfc" in "kube-system" namespace to be "Ready" ...
	I0827 23:52:30.478160 1945499 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:52:30.496628 1945499 pod_ready.go:93] pod "etcd-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"True"
	I0827 23:52:30.496710 1945499 pod_ready.go:82] duration metric: took 18.529328ms for pod "etcd-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:52:30.496741 1945499 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:52:31.145565 1945499 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.628403903s)
	I0827 23:52:31.145806 1945499 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.634085896s)
	I0827 23:52:31.145828 1945499 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-394049"
	I0827 23:52:31.210551 1945499 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.312731206s)
	I0827 23:52:31.444054 1945499 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.365301133s)
	I0827 23:52:31.446049 1945499 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-394049 addons enable metrics-server
	
	I0827 23:52:31.448067 1945499 out.go:177] * Enabled addons: metrics-server, default-storageclass, storage-provisioner, dashboard
	I0827 23:52:31.449830 1945499 addons.go:510] duration metric: took 19.773624726s for enable addons: enabled=[metrics-server default-storageclass storage-provisioner dashboard]
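[editor's note] The apply attempts above fail with "connection refused" while the apiserver is still coming back up, and minikube simply re-runs each manifest after a short, growing delay (the "will retry after ..." lines) until the applies finally complete roughly ten seconds later. The sketch below shows that retry-until-the-apiserver-answers shape; applyWithRetry and its backoff growth are illustrative assumptions for this note, not minikube's actual retry.go implementation.

// Illustrative only: retry a `kubectl apply` while the apiserver still refuses
// connections, mirroring the "apply failed, will retry after ..." lines above.
// applyWithRetry and the backoff policy are assumptions for this sketch.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	backoff := 2 * time.Second
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		fmt.Printf("apply failed, will retry after %s: %v\n%s", backoff, err, out)
		time.Sleep(backoff)
		backoff = backoff * 3 / 2 // grow the wait a little between attempts
	}
	return fmt.Errorf("apply of %s did not succeed after %d attempts", manifest, attempts)
}

func main() {
	if err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println(err)
	}
}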
	I0827 23:52:32.504156 1945499 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:34.506126 1945499 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:37.009959 1945499 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:38.505199 1945499 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"True"
	I0827 23:52:38.505227 1945499 pod_ready.go:82] duration metric: took 8.008453426s for pod "kube-apiserver-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:52:38.505239 1945499 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:52:40.511668 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:42.513025 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:45.048761 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:47.511787 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:49.512423 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:52.014455 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:54.016827 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:56.512467 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:52:58.513315 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:00.517979 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:03.022287 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:05.516715 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:08.017543 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:10.022881 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:12.511436 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:14.511515 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:16.511824 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:18.512595 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:21.017637 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:23.515888 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:25.523375 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:28.012706 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:30.147743 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:32.516483 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:35.017465 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:37.511824 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:39.514454 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:41.514541 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:43.519764 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:46.021968 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:48.023259 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:50.024458 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:52.511505 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:54.513571 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:57.022234 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:59.512743 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:00.513778 1945499 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"True"
	I0827 23:54:00.513813 1945499 pod_ready.go:82] duration metric: took 1m22.008565732s for pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.513827 1945499 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d84wl" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.522113 1945499 pod_ready.go:93] pod "kube-proxy-d84wl" in "kube-system" namespace has status "Ready":"True"
	I0827 23:54:00.522142 1945499 pod_ready.go:82] duration metric: took 8.306412ms for pod "kube-proxy-d84wl" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.522155 1945499 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.528947 1945499 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"True"
	I0827 23:54:00.528983 1945499 pod_ready.go:82] duration metric: took 6.820274ms for pod "kube-scheduler-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.528997 1945499 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:02.538013 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:05.049335 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:07.536856 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:09.544754 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:12.036784 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:14.037634 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:16.544787 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:19.036321 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:21.041602 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:23.535686 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:25.536146 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:27.540761 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:30.071722 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:32.538311 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:35.039074 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:37.535387 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:39.537746 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:41.542557 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:44.037226 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:46.535616 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:48.537805 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:50.542500 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:53.036589 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:55.068559 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:57.535050 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:59.536221 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:01.541466 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:04.036840 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:06.037668 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:08.540614 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:11.040996 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:13.536098 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:16.036958 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:18.037758 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:20.059068 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:22.542251 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:25.055999 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:27.549747 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:30.088038 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:32.535845 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:35.042825 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:37.537449 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:39.541413 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:42.036947 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:44.541623 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:47.037061 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:49.536624 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:51.540666 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:54.039877 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:56.041431 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:58.535820 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:00.569693 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:03.036146 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:05.063202 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:07.535319 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:09.542740 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:12.036240 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:14.041915 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:16.538564 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:18.539387 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:20.542941 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:23.035989 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:25.041259 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:27.535751 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:30.051150 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:32.541544 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:35.052137 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:37.541729 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:39.543135 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:42.044288 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:44.536100 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:47.036663 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:49.541661 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:52.041046 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:54.545514 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:57.037925 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:59.537151 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:01.541807 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:04.036301 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:06.037323 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:08.037761 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:10.061967 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:12.085447 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:14.549156 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:17.035779 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:19.036443 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:21.039024 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:23.536705 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:25.537439 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:28.037022 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:30.047408 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:32.535922 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:34.538361 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:37.040054 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:39.543065 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:41.543734 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:44.037557 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:46.535805 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:48.538154 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:51.036087 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:53.039962 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:55.050692 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:57.541257 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:58:00.089363 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:58:00.543555 1945499 pod_ready.go:82] duration metric: took 4m0.014542084s for pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace to be "Ready" ...
	E0827 23:58:00.543589 1945499 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0827 23:58:00.543600 1945499 pod_ready.go:39] duration metric: took 5m30.35086527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
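[editor's note] The metrics-server pod never reports Ready (its image pull from fake.domain fails, per the kubelet entries further down), so the extra wait exhausts its 4m0s budget and ends in "context deadline exceeded". Below is a minimal sketch of the poll-until-Ready-or-deadline shape these pod_ready waits have; waitFor and the probe function are illustrative assumptions, not minikube's actual pod_ready.go code.

// Illustrative only: the shape of the pod_ready "waiting up to N for pod ...
// to be Ready" loops above. waitFor and the probe are assumptions for this sketch.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

func waitFor(ctx context.Context, interval time.Duration, ready func() (bool, error)) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		ok, err := ready()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// The log used a 4m0s budget; a few seconds is enough to show the shape here.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	err := waitFor(ctx, 500*time.Millisecond, func() (bool, error) {
		// A real probe would read the pod's Ready condition from the API server.
		return false, nil
	})
	if errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("pod never became Ready before the deadline")
	}
}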
	I0827 23:58:00.543615 1945499 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:58:00.543647 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0827 23:58:00.543719 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 23:58:00.597945 1945499 cri.go:89] found id: "236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98"
	I0827 23:58:00.597976 1945499 cri.go:89] found id: "8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3"
	I0827 23:58:00.597982 1945499 cri.go:89] found id: ""
	I0827 23:58:00.597990 1945499 logs.go:276] 2 containers: [236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98 8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3]
	I0827 23:58:00.598054 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.602301 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.606401 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0827 23:58:00.606492 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 23:58:00.652901 1945499 cri.go:89] found id: "b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689"
	I0827 23:58:00.652924 1945499 cri.go:89] found id: "ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9"
	I0827 23:58:00.652929 1945499 cri.go:89] found id: ""
	I0827 23:58:00.652937 1945499 logs.go:276] 2 containers: [b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689 ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9]
	I0827 23:58:00.653001 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.657090 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.660704 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0827 23:58:00.660777 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 23:58:00.699407 1945499 cri.go:89] found id: "ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7"
	I0827 23:58:00.699433 1945499 cri.go:89] found id: "3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62"
	I0827 23:58:00.699438 1945499 cri.go:89] found id: ""
	I0827 23:58:00.699446 1945499 logs.go:276] 2 containers: [ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7 3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62]
	I0827 23:58:00.699516 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.703499 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.707751 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0827 23:58:00.707839 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 23:58:00.749347 1945499 cri.go:89] found id: "1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267"
	I0827 23:58:00.749421 1945499 cri.go:89] found id: "cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808"
	I0827 23:58:00.749433 1945499 cri.go:89] found id: ""
	I0827 23:58:00.749442 1945499 logs.go:276] 2 containers: [1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267 cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808]
	I0827 23:58:00.749515 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.753278 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.756794 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0827 23:58:00.756926 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 23:58:00.798299 1945499 cri.go:89] found id: "b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e"
	I0827 23:58:00.798331 1945499 cri.go:89] found id: "afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205"
	I0827 23:58:00.798337 1945499 cri.go:89] found id: ""
	I0827 23:58:00.798344 1945499 logs.go:276] 2 containers: [b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205]
	I0827 23:58:00.798412 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.802454 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.806186 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 23:58:00.806284 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 23:58:00.849117 1945499 cri.go:89] found id: "b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea"
	I0827 23:58:00.849140 1945499 cri.go:89] found id: "30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618"
	I0827 23:58:00.849145 1945499 cri.go:89] found id: ""
	I0827 23:58:00.849153 1945499 logs.go:276] 2 containers: [b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea 30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618]
	I0827 23:58:00.849226 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.852721 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.856151 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0827 23:58:00.856227 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 23:58:00.899038 1945499 cri.go:89] found id: "1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd"
	I0827 23:58:00.899063 1945499 cri.go:89] found id: "575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c"
	I0827 23:58:00.899068 1945499 cri.go:89] found id: ""
	I0827 23:58:00.899075 1945499 logs.go:276] 2 containers: [1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd 575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c]
	I0827 23:58:00.899130 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.903019 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.907211 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0827 23:58:00.907319 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0827 23:58:00.953896 1945499 cri.go:89] found id: "d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9"
	I0827 23:58:00.953964 1945499 cri.go:89] found id: ""
	I0827 23:58:00.953978 1945499 logs.go:276] 1 containers: [d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9]
	I0827 23:58:00.954053 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.958271 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0827 23:58:00.958391 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0827 23:58:01.010042 1945499 cri.go:89] found id: "42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf"
	I0827 23:58:01.010071 1945499 cri.go:89] found id: "592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c"
	I0827 23:58:01.010077 1945499 cri.go:89] found id: ""
	I0827 23:58:01.010085 1945499 logs.go:276] 2 containers: [42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf 592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c]
	I0827 23:58:01.010157 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:01.015859 1945499 ssh_runner.go:195] Run: which crictl
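[editor's note] Before gathering logs, minikube enumerates container IDs for each control-plane component by running crictl over SSH (the "listing CRI containers" / "found id:" lines above). A hedged sketch of that enumeration via os/exec follows; listContainers is an assumed helper for this note, not minikube's cri.go.

// Illustrative only: list container IDs for a named component with the same
// `sudo crictl ps -a --quiet --name=...` invocation seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one container ID per line of output
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Println("crictl failed:", err)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}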
	I0827 23:58:01.027889 1945499 logs.go:123] Gathering logs for kubelet ...
	I0827 23:58:01.027965 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0827 23:58:01.085733 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:29 old-k8s-version-394049 kubelet[661]: E0827 23:52:29.904499     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.085965 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:29 old-k8s-version-394049 kubelet[661]: E0827 23:52:29.905231     661 reflector.go:138] object-"kube-system"/"coredns-token-h2fzw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-h2fzw" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.086179 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:29 old-k8s-version-394049 kubelet[661]: E0827 23:52:29.905317     661 reflector.go:138] object-"kube-system"/"kindnet-token-nhzrn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nhzrn" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.089944 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162229     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-4fdqz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-4fdqz" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.090154 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162511     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.090397 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162590     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-dhs5r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-dhs5r" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.090605 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162658     661 reflector.go:138] object-"default"/"default-token-bdzp7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-bdzp7" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.090825 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162723     661 reflector.go:138] object-"kube-system"/"metrics-server-token-nrslw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-nrslw" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.099068 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:34 old-k8s-version-394049 kubelet[661]: E0827 23:52:34.053654     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.099263 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:34 old-k8s-version-394049 kubelet[661]: E0827 23:52:34.649560     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.102070 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:46 old-k8s-version-394049 kubelet[661]: E0827 23:52:46.414539     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.102400 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:48 old-k8s-version-394049 kubelet[661]: E0827 23:52:48.165741     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-sw62f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-sw62f" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.105519 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:59 old-k8s-version-394049 kubelet[661]: E0827 23:52:59.405677     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.106456 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:00 old-k8s-version-394049 kubelet[661]: E0827 23:53:00.763341     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.106810 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:01 old-k8s-version-394049 kubelet[661]: E0827 23:53:01.767516     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.107142 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:02 old-k8s-version-394049 kubelet[661]: E0827 23:53:02.770141     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.109947 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:10 old-k8s-version-394049 kubelet[661]: E0827 23:53:10.412938     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.110536 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:15 old-k8s-version-394049 kubelet[661]: E0827 23:53:15.810709     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.110863 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:21 old-k8s-version-394049 kubelet[661]: E0827 23:53:21.214756     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.111048 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:23 old-k8s-version-394049 kubelet[661]: E0827 23:53:23.403844     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.111379 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:32 old-k8s-version-394049 kubelet[661]: E0827 23:53:32.403201     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.111565 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:35 old-k8s-version-394049 kubelet[661]: E0827 23:53:35.404624     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.111882 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:47 old-k8s-version-394049 kubelet[661]: E0827 23:53:47.404977     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.112337 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:47 old-k8s-version-394049 kubelet[661]: E0827 23:53:47.913818     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.112668 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:51 old-k8s-version-394049 kubelet[661]: E0827 23:53:51.214411     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.115132 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:59 old-k8s-version-394049 kubelet[661]: E0827 23:53:59.420594     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.115465 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:03 old-k8s-version-394049 kubelet[661]: E0827 23:54:03.408451     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.115649 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:13 old-k8s-version-394049 kubelet[661]: E0827 23:54:13.404006     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.115974 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:16 old-k8s-version-394049 kubelet[661]: E0827 23:54:16.404599     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.116158 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:26 old-k8s-version-394049 kubelet[661]: E0827 23:54:26.403617     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.116755 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:32 old-k8s-version-394049 kubelet[661]: E0827 23:54:32.101869     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.116939 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:40 old-k8s-version-394049 kubelet[661]: E0827 23:54:40.403413     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.117265 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:41 old-k8s-version-394049 kubelet[661]: E0827 23:54:41.214563     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.117581 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:54 old-k8s-version-394049 kubelet[661]: E0827 23:54:54.403985     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.117777 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:54 old-k8s-version-394049 kubelet[661]: E0827 23:54:54.404224     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.117960 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:05 old-k8s-version-394049 kubelet[661]: E0827 23:55:05.403501     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.118284 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:08 old-k8s-version-394049 kubelet[661]: E0827 23:55:08.403160     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.118468 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:17 old-k8s-version-394049 kubelet[661]: E0827 23:55:17.404616     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.118793 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:21 old-k8s-version-394049 kubelet[661]: E0827 23:55:21.403616     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.121238 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:29 old-k8s-version-394049 kubelet[661]: E0827 23:55:29.412117     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.121567 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:34 old-k8s-version-394049 kubelet[661]: E0827 23:55:34.403137     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.121750 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:43 old-k8s-version-394049 kubelet[661]: E0827 23:55:43.403788     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.122077 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:49 old-k8s-version-394049 kubelet[661]: E0827 23:55:49.404105     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.122261 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:58 old-k8s-version-394049 kubelet[661]: E0827 23:55:58.403447     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.122851 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:01 old-k8s-version-394049 kubelet[661]: E0827 23:56:01.416088     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.123034 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:09 old-k8s-version-394049 kubelet[661]: E0827 23:56:09.404354     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.123360 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:11 old-k8s-version-394049 kubelet[661]: E0827 23:56:11.216894     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.123547 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:23 old-k8s-version-394049 kubelet[661]: E0827 23:56:23.403678     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.123877 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:24 old-k8s-version-394049 kubelet[661]: E0827 23:56:24.403435     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.124062 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:38 old-k8s-version-394049 kubelet[661]: E0827 23:56:38.403581     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.124534 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:39 old-k8s-version-394049 kubelet[661]: E0827 23:56:39.403644     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.124726 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:50 old-k8s-version-394049 kubelet[661]: E0827 23:56:50.403803     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.125058 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:54 old-k8s-version-394049 kubelet[661]: E0827 23:56:54.403100     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.125243 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:03 old-k8s-version-394049 kubelet[661]: E0827 23:57:03.403600     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.125572 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:09 old-k8s-version-394049 kubelet[661]: E0827 23:57:09.403474     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.125757 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:17 old-k8s-version-394049 kubelet[661]: E0827 23:57:17.403665     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.126082 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:22 old-k8s-version-394049 kubelet[661]: E0827 23:57:22.403246     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.126268 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:30 old-k8s-version-394049 kubelet[661]: E0827 23:57:30.403646     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.126596 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:36 old-k8s-version-394049 kubelet[661]: E0827 23:57:36.403163     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.126782 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:43 old-k8s-version-394049 kubelet[661]: E0827 23:57:43.403668     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.127136 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:51 old-k8s-version-394049 kubelet[661]: E0827 23:57:51.403157     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.127321 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:57 old-k8s-version-394049 kubelet[661]: E0827 23:57:57.404181     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0827 23:58:01.127331 1945499 logs.go:123] Gathering logs for etcd [ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9] ...
	I0827 23:58:01.127346 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9"
	I0827 23:58:01.173990 1945499 logs.go:123] Gathering logs for kube-controller-manager [30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618] ...
	I0827 23:58:01.174026 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618"
	I0827 23:58:01.230616 1945499 logs.go:123] Gathering logs for kindnet [1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd] ...
	I0827 23:58:01.230650 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd"
	I0827 23:58:01.275585 1945499 logs.go:123] Gathering logs for storage-provisioner [592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c] ...
	I0827 23:58:01.275621 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c"
	I0827 23:58:01.321719 1945499 logs.go:123] Gathering logs for kubernetes-dashboard [d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9] ...
	I0827 23:58:01.321752 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9"
	I0827 23:58:01.367943 1945499 logs.go:123] Gathering logs for dmesg ...
	I0827 23:58:01.367974 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 23:58:01.385565 1945499 logs.go:123] Gathering logs for describe nodes ...
	I0827 23:58:01.385595 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 23:58:01.542426 1945499 logs.go:123] Gathering logs for etcd [b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689] ...
	I0827 23:58:01.542461 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689"
	I0827 23:58:01.586413 1945499 logs.go:123] Gathering logs for kube-scheduler [cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808] ...
	I0827 23:58:01.586445 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808"
	I0827 23:58:01.634391 1945499 logs.go:123] Gathering logs for kube-proxy [b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e] ...
	I0827 23:58:01.634426 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e"
	I0827 23:58:01.674602 1945499 logs.go:123] Gathering logs for kube-controller-manager [b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea] ...
	I0827 23:58:01.674642 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea"
	I0827 23:58:01.756052 1945499 logs.go:123] Gathering logs for kindnet [575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c] ...
	I0827 23:58:01.756149 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c"
	I0827 23:58:01.829677 1945499 logs.go:123] Gathering logs for containerd ...
	I0827 23:58:01.829716 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0827 23:58:01.893566 1945499 logs.go:123] Gathering logs for kube-apiserver [236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98] ...
	I0827 23:58:01.893606 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98"
	I0827 23:58:01.975220 1945499 logs.go:123] Gathering logs for coredns [ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7] ...
	I0827 23:58:01.975254 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7"
	I0827 23:58:02.039338 1945499 logs.go:123] Gathering logs for coredns [3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62] ...
	I0827 23:58:02.039366 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62"
	I0827 23:58:02.085786 1945499 logs.go:123] Gathering logs for kube-scheduler [1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267] ...
	I0827 23:58:02.085819 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267"
	I0827 23:58:02.132044 1945499 logs.go:123] Gathering logs for kube-proxy [afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205] ...
	I0827 23:58:02.132073 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205"
	I0827 23:58:02.179174 1945499 logs.go:123] Gathering logs for storage-provisioner [42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf] ...
	I0827 23:58:02.179207 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf"
	I0827 23:58:02.220200 1945499 logs.go:123] Gathering logs for container status ...
	I0827 23:58:02.220234 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 23:58:02.269966 1945499 logs.go:123] Gathering logs for kube-apiserver [8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3] ...
	I0827 23:58:02.269997 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3"
	I0827 23:58:02.326500 1945499 out.go:358] Setting ErrFile to fd 2...
	I0827 23:58:02.326532 1945499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0827 23:58:02.326594 1945499 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0827 23:58:02.326608 1945499 out.go:270]   Aug 27 23:57:30 old-k8s-version-394049 kubelet[661]: E0827 23:57:30.403646     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 27 23:57:30 old-k8s-version-394049 kubelet[661]: E0827 23:57:30.403646     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:02.326640 1945499 out.go:270]   Aug 27 23:57:36 old-k8s-version-394049 kubelet[661]: E0827 23:57:36.403163     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	  Aug 27 23:57:36 old-k8s-version-394049 kubelet[661]: E0827 23:57:36.403163     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:02.326647 1945499 out.go:270]   Aug 27 23:57:43 old-k8s-version-394049 kubelet[661]: E0827 23:57:43.403668     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 27 23:57:43 old-k8s-version-394049 kubelet[661]: E0827 23:57:43.403668     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:02.326656 1945499 out.go:270]   Aug 27 23:57:51 old-k8s-version-394049 kubelet[661]: E0827 23:57:51.403157     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	  Aug 27 23:57:51 old-k8s-version-394049 kubelet[661]: E0827 23:57:51.403157     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:02.326661 1945499 out.go:270]   Aug 27 23:57:57 old-k8s-version-394049 kubelet[661]: E0827 23:57:57.404181     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 27 23:57:57 old-k8s-version-394049 kubelet[661]: E0827 23:57:57.404181     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0827 23:58:02.326666 1945499 out.go:358] Setting ErrFile to fd 2...
	I0827 23:58:02.326675 1945499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:58:12.327421 1945499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:58:12.341561 1945499 api_server.go:72] duration metric: took 6m0.665672398s to wait for apiserver process to appear ...
	I0827 23:58:12.341586 1945499 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:58:12.344259 1945499 out.go:201] 
	W0827 23:58:12.346377 1945499 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0827 23:58:12.346407 1945499 out.go:270] * 
	* 
	W0827 23:58:12.347367 1945499 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 23:58:12.348889 1945499 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-394049 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
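Note: the exit status 80 above matches the "X Exiting due to GUEST_START" failure in the stderr log, i.e. the apiserver healthz check never reported healthy within the 6m0s node wait. A minimal manual triage sketch, assuming the profile name from this run, minikube's default apiserver port 8443, and that curl is available in the node image (illustrative only, not part of the test harness):

	# probe the apiserver healthz endpoint from inside the guest node
	minikube -p old-k8s-version-394049 ssh -- curl -sk https://localhost:8443/healthz
	# or ask the apiserver through the kubeconfig context that minikube wrote
	kubectl --context old-k8s-version-394049 get --raw /healthz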
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-394049
helpers_test.go:235: (dbg) docker inspect old-k8s-version-394049:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "95f46ad71ee383734cf3718f7f3f3c15529e0294aea3acb28607ef3943f9dc3e",
	        "Created": "2024-08-27T23:48:50.60570392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1945714,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-27T23:52:03.421800756Z",
	            "FinishedAt": "2024-08-27T23:52:02.094063653Z"
	        },
	        "Image": "sha256:0985147309945253cbe7e881ef8b47b2eeae8c92bbeecfbcb5398ea2f50c97c6",
	        "ResolvConfPath": "/var/lib/docker/containers/95f46ad71ee383734cf3718f7f3f3c15529e0294aea3acb28607ef3943f9dc3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/95f46ad71ee383734cf3718f7f3f3c15529e0294aea3acb28607ef3943f9dc3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/95f46ad71ee383734cf3718f7f3f3c15529e0294aea3acb28607ef3943f9dc3e/hosts",
	        "LogPath": "/var/lib/docker/containers/95f46ad71ee383734cf3718f7f3f3c15529e0294aea3acb28607ef3943f9dc3e/95f46ad71ee383734cf3718f7f3f3c15529e0294aea3acb28607ef3943f9dc3e-json.log",
	        "Name": "/old-k8s-version-394049",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-394049:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-394049",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/312741932f25c812f3ef031224fd54ab571f33519a8cfc67991c3cc22eae3f76-init/diff:/var/lib/docker/overlay2/dff060cd4e9382e758ba60bffaaeeca22b78e3466a4ecd4887c9950dd9c3672c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/312741932f25c812f3ef031224fd54ab571f33519a8cfc67991c3cc22eae3f76/merged",
	                "UpperDir": "/var/lib/docker/overlay2/312741932f25c812f3ef031224fd54ab571f33519a8cfc67991c3cc22eae3f76/diff",
	                "WorkDir": "/var/lib/docker/overlay2/312741932f25c812f3ef031224fd54ab571f33519a8cfc67991c3cc22eae3f76/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-394049",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-394049/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-394049",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-394049",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-394049",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aafdc2c27186ccdb5b4e3afd3038eb432212d51a8a52a1aac71e0cf9d9b6a496",
	            "SandboxKey": "/var/run/docker/netns/aafdc2c27186",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-394049": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "c65e05bdf567481634a63e348351321d4d24583b856e3c37a9287645a0b3a702",
	                    "EndpointID": "5f13ada3a7ab726d5ed8794c17bae6be7ec67bcee82a92c1226a89162ab07fbf",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-394049",
	                        "95f46ad71ee3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
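The inspect output above shows the container itself is fine at the Docker level: State.Status is "running" and the apiserver port 8443/tcp is published on 127.0.0.1:33832, so the hang is inside the guest rather than in container startup. When only those fields matter, a Go-template query keeps the post-mortem shorter; this is a sketch using standard docker CLI flags, not part of the test flow:

	# container state only
	docker inspect -f '{{.State.Status}}' old-k8s-version-394049
	# host address/port that the guest's 8443 (apiserver) port is published on
	docker port old-k8s-version-394049 8443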
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-394049 -n old-k8s-version-394049
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-394049 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-394049 logs -n 25: (2.818206733s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | force-systemd-flag-036455                              | force-systemd-flag-036455 | jenkins | v1.33.1 | 27 Aug 24 23:47 UTC | 27 Aug 24 23:47 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-036455                           | force-systemd-flag-036455 | jenkins | v1.33.1 | 27 Aug 24 23:47 UTC | 27 Aug 24 23:47 UTC |
	| start   | -p cert-expiration-303453                              | cert-expiration-303453    | jenkins | v1.33.1 | 27 Aug 24 23:47 UTC | 27 Aug 24 23:48 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-908909                               | force-systemd-env-908909  | jenkins | v1.33.1 | 27 Aug 24 23:48 UTC | 27 Aug 24 23:48 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-908909                            | force-systemd-env-908909  | jenkins | v1.33.1 | 27 Aug 24 23:48 UTC | 27 Aug 24 23:48 UTC |
	| start   | -p cert-options-806650                                 | cert-options-806650       | jenkins | v1.33.1 | 27 Aug 24 23:48 UTC | 27 Aug 24 23:48 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-806650 ssh                                | cert-options-806650       | jenkins | v1.33.1 | 27 Aug 24 23:48 UTC | 27 Aug 24 23:48 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-806650 -- sudo                         | cert-options-806650       | jenkins | v1.33.1 | 27 Aug 24 23:48 UTC | 27 Aug 24 23:48 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-806650                                 | cert-options-806650       | jenkins | v1.33.1 | 27 Aug 24 23:48 UTC | 27 Aug 24 23:48 UTC |
	| start   | -p old-k8s-version-394049                              | old-k8s-version-394049    | jenkins | v1.33.1 | 27 Aug 24 23:48 UTC | 27 Aug 24 23:51 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-303453                              | cert-expiration-303453    | jenkins | v1.33.1 | 27 Aug 24 23:51 UTC | 27 Aug 24 23:51 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-303453                              | cert-expiration-303453    | jenkins | v1.33.1 | 27 Aug 24 23:51 UTC | 27 Aug 24 23:51 UTC |
	| start   | -p no-preload-710826                                   | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:51 UTC | 27 Aug 24 23:52 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-394049        | old-k8s-version-394049    | jenkins | v1.33.1 | 27 Aug 24 23:51 UTC | 27 Aug 24 23:51 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-394049                              | old-k8s-version-394049    | jenkins | v1.33.1 | 27 Aug 24 23:51 UTC | 27 Aug 24 23:52 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-394049             | old-k8s-version-394049    | jenkins | v1.33.1 | 27 Aug 24 23:52 UTC | 27 Aug 24 23:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-394049                              | old-k8s-version-394049    | jenkins | v1.33.1 | 27 Aug 24 23:52 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-710826             | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:53 UTC | 27 Aug 24 23:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-710826                                   | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:53 UTC | 27 Aug 24 23:53 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-710826                  | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:53 UTC | 27 Aug 24 23:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-710826                                   | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:53 UTC | 27 Aug 24 23:57 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                           |         |         |                     |                     |
	| image   | no-preload-710826 image list                           | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:58 UTC | 27 Aug 24 23:58 UTC |
	|         | --format=json                                          |                           |         |         |                     |                     |
	| pause   | -p no-preload-710826                                   | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:58 UTC | 27 Aug 24 23:58 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| unpause | -p no-preload-710826                                   | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:58 UTC | 27 Aug 24 23:58 UTC |
	|         | --alsologtostderr -v=1                                 |                           |         |         |                     |                     |
	| delete  | -p no-preload-710826                                   | no-preload-710826         | jenkins | v1.33.1 | 27 Aug 24 23:58 UTC |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
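
	For context, the rows above are the literal CLI invocations issued by the harness. A minimal sketch of replaying the old-k8s-version-394049 portion by hand, using the same binary (MINIKUBE_BIN=out/minikube-linux-arm64, as logged below) and copying the flags verbatim from the table; this is an approximation for local reproduction, not part of the recorded run:

	  # sketch: replay the recorded old-k8s-version-394049 sequence (flags copied from the table above)
	  MINIKUBE=out/minikube-linux-arm64
	  $MINIKUBE addons enable metrics-server -p old-k8s-version-394049 \
	      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	  $MINIKUBE stop -p old-k8s-version-394049 --alsologtostderr -v=3
	  $MINIKUBE addons enable dashboard -p old-k8s-version-394049 \
	      --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	  $MINIKUBE start -p old-k8s-version-394049 --memory=2200 --alsologtostderr --wait=true \
	      --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	      --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0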
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 23:53:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 23:53:17.696618 1950495 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:53:17.696971 1950495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:53:17.696984 1950495 out.go:358] Setting ErrFile to fd 2...
	I0827 23:53:17.696991 1950495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:53:17.697398 1950495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:53:17.697974 1950495 out.go:352] Setting JSON to false
	I0827 23:53:17.699165 1950495 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":27347,"bootTime":1724775451,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0827 23:53:17.699308 1950495 start.go:139] virtualization:  
	I0827 23:53:17.701896 1950495 out.go:177] * [no-preload-710826] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 23:53:17.704232 1950495 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:53:17.704361 1950495 notify.go:220] Checking for updates...
	I0827 23:53:17.707807 1950495 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:53:17.709529 1950495 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:53:17.711267 1950495 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	I0827 23:53:17.713024 1950495 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 23:53:17.714453 1950495 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:53:14.511515 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:16.511824 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:17.716908 1950495 config.go:182] Loaded profile config "no-preload-710826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:53:17.717562 1950495 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:53:17.748480 1950495 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 23:53:17.748613 1950495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:53:17.804552 1950495 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-27 23:53:17.794314673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:53:17.804662 1950495 docker.go:307] overlay module found
	I0827 23:53:17.806481 1950495 out.go:177] * Using the docker driver based on existing profile
	I0827 23:53:17.808323 1950495 start.go:297] selected driver: docker
	I0827 23:53:17.808345 1950495 start.go:901] validating driver "docker" against &{Name:no-preload-710826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-710826 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false Mount
String:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:53:17.808551 1950495 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:53:17.809227 1950495 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:53:17.861323 1950495 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-27 23:53:17.852134116 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:53:17.861688 1950495 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:53:17.861759 1950495 cni.go:84] Creating CNI manager for ""
	I0827 23:53:17.861774 1950495 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:53:17.861819 1950495 start.go:340] cluster config:
	{Name:no-preload-710826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-710826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:53:17.864660 1950495 out.go:177] * Starting "no-preload-710826" primary control-plane node in "no-preload-710826" cluster
	I0827 23:53:17.866389 1950495 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0827 23:53:17.868154 1950495 out.go:177] * Pulling base image v0.0.44-1724667927-19511 ...
	I0827 23:53:17.869845 1950495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0827 23:53:17.869940 1950495 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local docker daemon
	I0827 23:53:17.869996 1950495 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/config.json ...
	I0827 23:53:17.870312 1950495 cache.go:107] acquiring lock: {Name:mka8ce19387b8ecc7182a163d9ad327ce657905d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:17.870396 1950495 cache.go:115] /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0827 23:53:17.870410 1950495 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 104.474µs
	I0827 23:53:17.870419 1950495 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0827 23:53:17.870435 1950495 cache.go:107] acquiring lock: {Name:mk15630c99494f266f208008f7a1863d161236cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:17.870470 1950495 cache.go:115] /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 exists
	I0827 23:53:17.870479 1950495 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.0" -> "/home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0" took 45.726µs
	I0827 23:53:17.870486 1950495 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.0 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.0 succeeded
	I0827 23:53:17.870500 1950495 cache.go:107] acquiring lock: {Name:mk99ed6b473bafdb0df02dcac7f9298cea0b393e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:17.870530 1950495 cache.go:115] /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 exists
	I0827 23:53:17.870539 1950495 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.0" -> "/home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0" took 40.319µs
	I0827 23:53:17.870546 1950495 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.0 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.0 succeeded
	I0827 23:53:17.870555 1950495 cache.go:107] acquiring lock: {Name:mk1e52fef9af06805c4147280fd611fd98eb0833 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:17.870585 1950495 cache.go:115] /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 exists
	I0827 23:53:17.870593 1950495 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.0" -> "/home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0" took 36.512µs
	I0827 23:53:17.870650 1950495 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.0 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.0 succeeded
	I0827 23:53:17.870662 1950495 cache.go:107] acquiring lock: {Name:mk15121a6c6e9f015b380981da645a9205936b44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:17.870725 1950495 cache.go:115] /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 exists
	I0827 23:53:17.870735 1950495 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.0" -> "/home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0" took 74.288µs
	I0827 23:53:17.870741 1950495 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.0 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.0 succeeded
	I0827 23:53:17.870761 1950495 cache.go:107] acquiring lock: {Name:mk9b39936b51bf5908de462463cfaf08be2cf7c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:17.870806 1950495 cache.go:115] /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0827 23:53:17.870811 1950495 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 52.266µs
	I0827 23:53:17.870818 1950495 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0827 23:53:17.870806 1950495 cache.go:107] acquiring lock: {Name:mk9547ce0efe22854f48ac74b06eac65b4021bad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:17.870827 1950495 cache.go:107] acquiring lock: {Name:mk8a8775799538f1dc113727b7d3adfdd7b0bc94 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:17.870856 1950495 cache.go:115] /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0827 23:53:17.870861 1950495 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 35.15µs
	I0827 23:53:17.870866 1950495 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0827 23:53:17.870868 1950495 cache.go:115] /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0827 23:53:17.870876 1950495 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 79.112µs
	I0827 23:53:17.870884 1950495 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0827 23:53:17.870904 1950495 cache.go:87] Successfully saved all images to host disk.
	W0827 23:53:17.889844 1950495 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 is of wrong architecture
	I0827 23:53:17.889864 1950495 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 23:53:17.889966 1950495 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory
	I0827 23:53:17.889988 1950495 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory, skipping pull
	I0827 23:53:17.889993 1950495 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 exists in cache, skipping pull
	I0827 23:53:17.890006 1950495 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 as a tarball
	I0827 23:53:17.890016 1950495 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 from local cache
	I0827 23:53:18.056070 1950495 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 from cached tarball
	I0827 23:53:18.056111 1950495 cache.go:194] Successfully downloaded all kic artifacts
	I0827 23:53:18.056153 1950495 start.go:360] acquireMachinesLock for no-preload-710826: {Name:mkb88246583f0ba141c380b10048d080b70c487f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0827 23:53:18.056224 1950495 start.go:364] duration metric: took 52.561µs to acquireMachinesLock for "no-preload-710826"
	I0827 23:53:18.056249 1950495 start.go:96] Skipping create...Using existing machine configuration
	I0827 23:53:18.056255 1950495 fix.go:54] fixHost starting: 
	I0827 23:53:18.056671 1950495 cli_runner.go:164] Run: docker container inspect no-preload-710826 --format={{.State.Status}}
	I0827 23:53:18.074518 1950495 fix.go:112] recreateIfNeeded on no-preload-710826: state=Stopped err=<nil>
	W0827 23:53:18.074556 1950495 fix.go:138] unexpected machine state, will restart: <nil>
	I0827 23:53:18.077957 1950495 out.go:177] * Restarting existing docker container for "no-preload-710826" ...
	I0827 23:53:18.079563 1950495 cli_runner.go:164] Run: docker start no-preload-710826
	I0827 23:53:18.394828 1950495 cli_runner.go:164] Run: docker container inspect no-preload-710826 --format={{.State.Status}}
	I0827 23:53:18.420306 1950495 kic.go:430] container "no-preload-710826" state is running.
	I0827 23:53:18.420833 1950495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-710826
	I0827 23:53:18.445733 1950495 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/config.json ...
	I0827 23:53:18.445969 1950495 machine.go:93] provisionDockerMachine start ...
	I0827 23:53:18.446031 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:18.469360 1950495 main.go:141] libmachine: Using SSH client type: native
	I0827 23:53:18.469632 1950495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I0827 23:53:18.469641 1950495 main.go:141] libmachine: About to run SSH command:
	hostname
	I0827 23:53:18.470295 1950495 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43900->127.0.0.1:33834: read: connection reset by peer
	I0827 23:53:21.616073 1950495 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-710826
	
	I0827 23:53:21.616099 1950495 ubuntu.go:169] provisioning hostname "no-preload-710826"
	I0827 23:53:21.616164 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:21.635732 1950495 main.go:141] libmachine: Using SSH client type: native
	I0827 23:53:21.635998 1950495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I0827 23:53:21.636014 1950495 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-710826 && echo "no-preload-710826" | sudo tee /etc/hostname
	I0827 23:53:21.797426 1950495 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-710826
	
	I0827 23:53:21.797528 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:21.814439 1950495 main.go:141] libmachine: Using SSH client type: native
	I0827 23:53:21.814703 1950495 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I0827 23:53:21.814726 1950495 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-710826' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-710826/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-710826' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0827 23:53:21.960692 1950495 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0827 23:53:21.960723 1950495 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19522-1734325/.minikube CaCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19522-1734325/.minikube}
	I0827 23:53:21.960748 1950495 ubuntu.go:177] setting up certificates
	I0827 23:53:21.960761 1950495 provision.go:84] configureAuth start
	I0827 23:53:21.960825 1950495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-710826
	I0827 23:53:21.978419 1950495 provision.go:143] copyHostCerts
	I0827 23:53:21.978493 1950495 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.pem, removing ...
	I0827 23:53:21.978506 1950495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.pem
	I0827 23:53:21.978582 1950495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.pem (1078 bytes)
	I0827 23:53:21.978696 1950495 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-1734325/.minikube/cert.pem, removing ...
	I0827 23:53:21.978707 1950495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-1734325/.minikube/cert.pem
	I0827 23:53:21.978736 1950495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/cert.pem (1123 bytes)
	I0827 23:53:21.978802 1950495 exec_runner.go:144] found /home/jenkins/minikube-integration/19522-1734325/.minikube/key.pem, removing ...
	I0827 23:53:21.978810 1950495 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19522-1734325/.minikube/key.pem
	I0827 23:53:21.978836 1950495 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19522-1734325/.minikube/key.pem (1675 bytes)
	I0827 23:53:21.978902 1950495 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem org=jenkins.no-preload-710826 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-710826]
	I0827 23:53:22.525420 1950495 provision.go:177] copyRemoteCerts
	I0827 23:53:22.525490 1950495 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0827 23:53:22.525534 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:22.543636 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:22.645806 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0827 23:53:22.673205 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0827 23:53:18.512595 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:21.017637 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:22.699115 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0827 23:53:22.725459 1950495 provision.go:87] duration metric: took 764.682951ms to configureAuth
	I0827 23:53:22.725487 1950495 ubuntu.go:193] setting minikube options for container-runtime
	I0827 23:53:22.725708 1950495 config.go:182] Loaded profile config "no-preload-710826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:53:22.725715 1950495 machine.go:96] duration metric: took 4.279738288s to provisionDockerMachine
	I0827 23:53:22.725723 1950495 start.go:293] postStartSetup for "no-preload-710826" (driver="docker")
	I0827 23:53:22.725733 1950495 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0827 23:53:22.725787 1950495 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0827 23:53:22.725829 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:22.743313 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:22.845745 1950495 ssh_runner.go:195] Run: cat /etc/os-release
	I0827 23:53:22.849014 1950495 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0827 23:53:22.849061 1950495 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0827 23:53:22.849072 1950495 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0827 23:53:22.849096 1950495 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0827 23:53:22.849112 1950495 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-1734325/.minikube/addons for local assets ...
	I0827 23:53:22.849182 1950495 filesync.go:126] Scanning /home/jenkins/minikube-integration/19522-1734325/.minikube/files for local assets ...
	I0827 23:53:22.849273 1950495 filesync.go:149] local asset: /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/ssl/certs/17397152.pem -> 17397152.pem in /etc/ssl/certs
	I0827 23:53:22.849399 1950495 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0827 23:53:22.858445 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/ssl/certs/17397152.pem --> /etc/ssl/certs/17397152.pem (1708 bytes)
	I0827 23:53:22.883973 1950495 start.go:296] duration metric: took 158.234883ms for postStartSetup
	I0827 23:53:22.884103 1950495 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:53:22.884163 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:22.902025 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:23.003123 1950495 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0827 23:53:23.013167 1950495 fix.go:56] duration metric: took 4.956903456s for fixHost
	I0827 23:53:23.013244 1950495 start.go:83] releasing machines lock for "no-preload-710826", held for 4.957009005s
	I0827 23:53:23.013371 1950495 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-710826
	I0827 23:53:23.033428 1950495 ssh_runner.go:195] Run: cat /version.json
	I0827 23:53:23.033532 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:23.033838 1950495 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0827 23:53:23.033905 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:23.060717 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:23.061254 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:23.305856 1950495 ssh_runner.go:195] Run: systemctl --version
	I0827 23:53:23.310457 1950495 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0827 23:53:23.314927 1950495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0827 23:53:23.334104 1950495 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0827 23:53:23.334193 1950495 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0827 23:53:23.344278 1950495 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0827 23:53:23.344322 1950495 start.go:495] detecting cgroup driver to use...
	I0827 23:53:23.344415 1950495 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0827 23:53:23.344519 1950495 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0827 23:53:23.358938 1950495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0827 23:53:23.371215 1950495 docker.go:217] disabling cri-docker service (if available) ...
	I0827 23:53:23.371327 1950495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0827 23:53:23.385487 1950495 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0827 23:53:23.398443 1950495 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0827 23:53:23.502113 1950495 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0827 23:53:23.590966 1950495 docker.go:233] disabling docker service ...
	I0827 23:53:23.591077 1950495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0827 23:53:23.606166 1950495 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0827 23:53:23.618298 1950495 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0827 23:53:23.713960 1950495 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0827 23:53:23.798702 1950495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0827 23:53:23.811797 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0827 23:53:23.829247 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0827 23:53:23.841627 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0827 23:53:23.852046 1950495 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0827 23:53:23.852119 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0827 23:53:23.862704 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 23:53:23.873506 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0827 23:53:23.883606 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0827 23:53:23.894863 1950495 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0827 23:53:23.905094 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0827 23:53:23.915514 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0827 23:53:23.926906 1950495 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0827 23:53:23.937794 1950495 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0827 23:53:23.948502 1950495 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0827 23:53:23.957377 1950495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:53:24.065185 1950495 ssh_runner.go:195] Run: sudo systemctl restart containerd
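	Note: the sed edits above (sandbox_image, restrict_oom_score_adj, SystemdCgroup, conf_dir, enable_unprivileged_ports) all target /etc/containerd/config.toml before this restart. A rough reconstruction of the affected settings, assuming containerd's usual CRI plugin layout (the full file is not shown in this log; section nesting is an assumption, only the values come from the commands above):

	  # sketch of the config.toml fragment implied by the sed commands above; section placement assumed
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    sandbox_image = "registry.k8s.io/pause:3.10"
	    restrict_oom_score_adj = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false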
	I0827 23:53:24.239813 1950495 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0827 23:53:24.239885 1950495 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0827 23:53:24.244810 1950495 start.go:563] Will wait 60s for crictl version
	I0827 23:53:24.244874 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:53:24.248728 1950495 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0827 23:53:24.290708 1950495 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0827 23:53:24.290818 1950495 ssh_runner.go:195] Run: containerd --version
	I0827 23:53:24.316016 1950495 ssh_runner.go:195] Run: containerd --version
	I0827 23:53:24.342879 1950495 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0827 23:53:24.345122 1950495 cli_runner.go:164] Run: docker network inspect no-preload-710826 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0827 23:53:24.368559 1950495 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0827 23:53:24.372601 1950495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:53:24.384817 1950495 kubeadm.go:883] updating cluster {Name:no-preload-710826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-710826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenk
ins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0827 23:53:24.384939 1950495 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0827 23:53:24.384983 1950495 ssh_runner.go:195] Run: sudo crictl images --output json
	I0827 23:53:24.427485 1950495 containerd.go:627] all images are preloaded for containerd runtime.
	I0827 23:53:24.427518 1950495 cache_images.go:84] Images are preloaded, skipping loading
	I0827 23:53:24.427527 1950495 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.0 containerd true true} ...
	I0827 23:53:24.427686 1950495 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-710826 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:no-preload-710826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0827 23:53:24.427763 1950495 ssh_runner.go:195] Run: sudo crictl info
	I0827 23:53:24.471686 1950495 cni.go:84] Creating CNI manager for ""
	I0827 23:53:24.471713 1950495 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:53:24.471732 1950495 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0827 23:53:24.471789 1950495 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-710826 NodeName:no-preload-710826 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0827 23:53:24.471947 1950495 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-710826"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0827 23:53:24.472022 1950495 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0827 23:53:24.482730 1950495 binaries.go:44] Found k8s binaries, skipping transfer
	I0827 23:53:24.482809 1950495 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0827 23:53:24.492714 1950495 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0827 23:53:24.522735 1950495 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0827 23:53:24.543828 1950495 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0827 23:53:24.565336 1950495 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0827 23:53:24.569234 1950495 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0827 23:53:24.580607 1950495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:53:24.669719 1950495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:53:24.687687 1950495 certs.go:68] Setting up /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826 for IP: 192.168.85.2
	I0827 23:53:24.687706 1950495 certs.go:194] generating shared ca certs ...
	I0827 23:53:24.687722 1950495 certs.go:226] acquiring lock for ca certs: {Name:mkd3d47e0a7419f9dbeb7a4e1a68db1090a3adb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:53:24.687903 1950495 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.key
	I0827 23:53:24.687980 1950495 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.key
	I0827 23:53:24.687994 1950495 certs.go:256] generating profile certs ...
	I0827 23:53:24.688099 1950495 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.key
	I0827 23:53:24.688195 1950495 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/apiserver.key.c3f41291
	I0827 23:53:24.688255 1950495 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/proxy-client.key
	I0827 23:53:24.688501 1950495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/1739715.pem (1338 bytes)
	W0827 23:53:24.688564 1950495 certs.go:480] ignoring /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/1739715_empty.pem, impossibly tiny 0 bytes
	I0827 23:53:24.688579 1950495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca-key.pem (1675 bytes)
	I0827 23:53:24.688605 1950495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/ca.pem (1078 bytes)
	I0827 23:53:24.688662 1950495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/cert.pem (1123 bytes)
	I0827 23:53:24.688690 1950495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/key.pem (1675 bytes)
	I0827 23:53:24.688749 1950495 certs.go:484] found cert: /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/ssl/certs/17397152.pem (1708 bytes)
	I0827 23:53:24.689451 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0827 23:53:24.718773 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0827 23:53:24.747521 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0827 23:53:24.773711 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0827 23:53:24.800557 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0827 23:53:24.834001 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0827 23:53:24.885412 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0827 23:53:24.921989 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0827 23:53:24.961821 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0827 23:53:24.991943 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/certs/1739715.pem --> /usr/share/ca-certificates/1739715.pem (1338 bytes)
	I0827 23:53:25.038575 1950495 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/ssl/certs/17397152.pem --> /usr/share/ca-certificates/17397152.pem (1708 bytes)
	I0827 23:53:25.073371 1950495 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0827 23:53:25.094726 1950495 ssh_runner.go:195] Run: openssl version
	I0827 23:53:25.102568 1950495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/17397152.pem && ln -fs /usr/share/ca-certificates/17397152.pem /etc/ssl/certs/17397152.pem"
	I0827 23:53:25.113983 1950495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/17397152.pem
	I0827 23:53:25.118330 1950495 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 27 23:12 /usr/share/ca-certificates/17397152.pem
	I0827 23:53:25.118407 1950495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/17397152.pem
	I0827 23:53:25.125911 1950495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/17397152.pem /etc/ssl/certs/3ec20f2e.0"
	I0827 23:53:25.137223 1950495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0827 23:53:25.149921 1950495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:53:25.154561 1950495 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 27 23:02 /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:53:25.154641 1950495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0827 23:53:25.162866 1950495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0827 23:53:25.173176 1950495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1739715.pem && ln -fs /usr/share/ca-certificates/1739715.pem /etc/ssl/certs/1739715.pem"
	I0827 23:53:25.184987 1950495 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1739715.pem
	I0827 23:53:25.189435 1950495 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 27 23:12 /usr/share/ca-certificates/1739715.pem
	I0827 23:53:25.189539 1950495 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1739715.pem
	I0827 23:53:25.197705 1950495 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1739715.pem /etc/ssl/certs/51391683.0"
	I0827 23:53:25.207896 1950495 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0827 23:53:25.212272 1950495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0827 23:53:25.219565 1950495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0827 23:53:25.226665 1950495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0827 23:53:25.234194 1950495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0827 23:53:25.241856 1950495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0827 23:53:25.249236 1950495 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0827 23:53:25.257750 1950495 kubeadm.go:392] StartCluster: {Name:no-preload-710826 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:no-preload-710826 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:53:25.257871 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0827 23:53:25.257951 1950495 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0827 23:53:25.301249 1950495 cri.go:89] found id: "41e2c7c6f8a29f512bb045de06399f7167dc694f67357f3626de21be8761a02e"
	I0827 23:53:25.301274 1950495 cri.go:89] found id: "5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8"
	I0827 23:53:25.301280 1950495 cri.go:89] found id: "9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03"
	I0827 23:53:25.301286 1950495 cri.go:89] found id: "e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e"
	I0827 23:53:25.301290 1950495 cri.go:89] found id: "7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d"
	I0827 23:53:25.301293 1950495 cri.go:89] found id: "b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58"
	I0827 23:53:25.301316 1950495 cri.go:89] found id: "1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b"
	I0827 23:53:25.301324 1950495 cri.go:89] found id: "3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0"
	I0827 23:53:25.301334 1950495 cri.go:89] found id: ""
	I0827 23:53:25.301410 1950495 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0827 23:53:25.314429 1950495 cri.go:116] JSON = null
	W0827 23:53:25.314480 1950495 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0827 23:53:25.314563 1950495 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0827 23:53:25.323601 1950495 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0827 23:53:25.323623 1950495 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0827 23:53:25.323700 1950495 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0827 23:53:25.346116 1950495 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0827 23:53:25.346791 1950495 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-710826" does not appear in /home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:53:25.347111 1950495 kubeconfig.go:62] /home/jenkins/minikube-integration/19522-1734325/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-710826" cluster setting kubeconfig missing "no-preload-710826" context setting]
	I0827 23:53:25.347697 1950495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/kubeconfig: {Name:mkbc2349839e7e640d3be8c9c9dabdbaf532417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:53:25.349419 1950495 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0827 23:53:25.363694 1950495 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0827 23:53:25.363730 1950495 kubeadm.go:597] duration metric: took 40.100698ms to restartPrimaryControlPlane
	I0827 23:53:25.363739 1950495 kubeadm.go:394] duration metric: took 105.998464ms to StartCluster
	I0827 23:53:25.363786 1950495 settings.go:142] acquiring lock: {Name:mk2abdfb376a9e7540e648c96e5aaa1709f13213 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:53:25.363867 1950495 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:53:25.365369 1950495 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/kubeconfig: {Name:mkbc2349839e7e640d3be8c9c9dabdbaf532417a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:53:25.365678 1950495 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0827 23:53:25.365945 1950495 config.go:182] Loaded profile config "no-preload-710826": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:53:25.366126 1950495 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0827 23:53:25.366393 1950495 addons.go:69] Setting storage-provisioner=true in profile "no-preload-710826"
	I0827 23:53:25.366435 1950495 addons.go:234] Setting addon storage-provisioner=true in "no-preload-710826"
	W0827 23:53:25.366445 1950495 addons.go:243] addon storage-provisioner should already be in state true
	I0827 23:53:25.366476 1950495 host.go:66] Checking if "no-preload-710826" exists ...
	I0827 23:53:25.366893 1950495 addons.go:69] Setting default-storageclass=true in profile "no-preload-710826"
	I0827 23:53:25.366930 1950495 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-710826"
	I0827 23:53:25.367196 1950495 cli_runner.go:164] Run: docker container inspect no-preload-710826 --format={{.State.Status}}
	I0827 23:53:25.367559 1950495 cli_runner.go:164] Run: docker container inspect no-preload-710826 --format={{.State.Status}}
	I0827 23:53:25.371237 1950495 addons.go:69] Setting dashboard=true in profile "no-preload-710826"
	I0827 23:53:25.372847 1950495 addons.go:234] Setting addon dashboard=true in "no-preload-710826"
	W0827 23:53:25.372867 1950495 addons.go:243] addon dashboard should already be in state true
	I0827 23:53:25.372912 1950495 host.go:66] Checking if "no-preload-710826" exists ...
	I0827 23:53:25.372914 1950495 out.go:177] * Verifying Kubernetes components...
	I0827 23:53:25.373385 1950495 cli_runner.go:164] Run: docker container inspect no-preload-710826 --format={{.State.Status}}
	I0827 23:53:25.372794 1950495 addons.go:69] Setting metrics-server=true in profile "no-preload-710826"
	I0827 23:53:25.373550 1950495 addons.go:234] Setting addon metrics-server=true in "no-preload-710826"
	W0827 23:53:25.373565 1950495 addons.go:243] addon metrics-server should already be in state true
	I0827 23:53:25.373588 1950495 host.go:66] Checking if "no-preload-710826" exists ...
	I0827 23:53:25.374050 1950495 cli_runner.go:164] Run: docker container inspect no-preload-710826 --format={{.State.Status}}
	I0827 23:53:25.378967 1950495 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0827 23:53:25.429526 1950495 addons.go:234] Setting addon default-storageclass=true in "no-preload-710826"
	W0827 23:53:25.429547 1950495 addons.go:243] addon default-storageclass should already be in state true
	I0827 23:53:25.429573 1950495 host.go:66] Checking if "no-preload-710826" exists ...
	I0827 23:53:25.429985 1950495 cli_runner.go:164] Run: docker container inspect no-preload-710826 --format={{.State.Status}}
	I0827 23:53:25.448432 1950495 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0827 23:53:25.452580 1950495 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0827 23:53:25.454490 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0827 23:53:25.454522 1950495 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0827 23:53:25.454606 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:25.466468 1950495 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0827 23:53:25.469017 1950495 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:53:25.469042 1950495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0827 23:53:25.469107 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:25.475756 1950495 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0827 23:53:25.477462 1950495 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0827 23:53:25.477490 1950495 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0827 23:53:25.477567 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:25.508659 1950495 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0827 23:53:25.508680 1950495 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0827 23:53:25.508748 1950495 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-710826
	I0827 23:53:25.535271 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:25.535958 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:25.557617 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:25.569728 1950495 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/no-preload-710826/id_rsa Username:docker}
	I0827 23:53:25.583921 1950495 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0827 23:53:25.690923 1950495 node_ready.go:35] waiting up to 6m0s for node "no-preload-710826" to be "Ready" ...
	I0827 23:53:25.789200 1950495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0827 23:53:25.821875 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0827 23:53:25.821942 1950495 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0827 23:53:25.878215 1950495 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0827 23:53:25.878280 1950495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0827 23:53:25.942319 1950495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0827 23:53:25.962579 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0827 23:53:25.962656 1950495 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0827 23:53:26.051374 1950495 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0827 23:53:26.051449 1950495 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0827 23:53:26.255611 1950495 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 23:53:26.255688 1950495 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0827 23:53:26.310706 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0827 23:53:26.310775 1950495 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0827 23:53:26.524063 1950495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0827 23:53:26.527483 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0827 23:53:26.527559 1950495 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0827 23:53:26.649684 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0827 23:53:26.649757 1950495 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0827 23:53:26.728832 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0827 23:53:26.728902 1950495 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0827 23:53:26.768802 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0827 23:53:26.768880 1950495 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0827 23:53:26.823928 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0827 23:53:26.824004 1950495 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0827 23:53:26.859100 1950495 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0827 23:53:26.859173 1950495 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0827 23:53:26.886373 1950495 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0827 23:53:23.515888 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:25.523375 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:31.675648 1950495 node_ready.go:49] node "no-preload-710826" has status "Ready":"True"
	I0827 23:53:31.675679 1950495 node_ready.go:38] duration metric: took 5.9846765s for node "no-preload-710826" to be "Ready" ...
	I0827 23:53:31.675689 1950495 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:53:31.686663 1950495 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-2qvsz" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.695286 1950495 pod_ready.go:93] pod "coredns-6f6b679f8f-2qvsz" in "kube-system" namespace has status "Ready":"True"
	I0827 23:53:31.695315 1950495 pod_ready.go:82] duration metric: took 8.612547ms for pod "coredns-6f6b679f8f-2qvsz" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.695327 1950495 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-710826" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.702424 1950495 pod_ready.go:93] pod "etcd-no-preload-710826" in "kube-system" namespace has status "Ready":"True"
	I0827 23:53:31.702451 1950495 pod_ready.go:82] duration metric: took 7.115439ms for pod "etcd-no-preload-710826" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.702470 1950495 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-710826" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.729321 1950495 pod_ready.go:93] pod "kube-apiserver-no-preload-710826" in "kube-system" namespace has status "Ready":"True"
	I0827 23:53:31.729349 1950495 pod_ready.go:82] duration metric: took 26.870677ms for pod "kube-apiserver-no-preload-710826" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.729363 1950495 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-710826" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.736077 1950495 pod_ready.go:93] pod "kube-controller-manager-no-preload-710826" in "kube-system" namespace has status "Ready":"True"
	I0827 23:53:31.736103 1950495 pod_ready.go:82] duration metric: took 6.731201ms for pod "kube-controller-manager-no-preload-710826" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.736117 1950495 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n47gz" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.879407 1950495 pod_ready.go:93] pod "kube-proxy-n47gz" in "kube-system" namespace has status "Ready":"True"
	I0827 23:53:31.879433 1950495 pod_ready.go:82] duration metric: took 143.30847ms for pod "kube-proxy-n47gz" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:31.879446 1950495 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-710826" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:28.012706 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:30.147743 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:32.516483 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:33.886497 1950495 pod_ready.go:103] pod "kube-scheduler-no-preload-710826" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:34.823303 1950495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.034026578s)
	I0827 23:53:34.823354 1950495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.88096074s)
	I0827 23:53:34.823587 1950495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.299445011s)
	I0827 23:53:34.823609 1950495 addons.go:475] Verifying addon metrics-server=true in "no-preload-710826"
	I0827 23:53:35.108559 1950495 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.222078211s)
	I0827 23:53:35.110391 1950495 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-710826 addons enable metrics-server
	
	I0827 23:53:35.112157 1950495 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0827 23:53:35.113902 1950495 addons.go:510] duration metric: took 9.747767367s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0827 23:53:36.386636 1950495 pod_ready.go:103] pod "kube-scheduler-no-preload-710826" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:35.017465 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:37.511824 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:38.387098 1950495 pod_ready.go:103] pod "kube-scheduler-no-preload-710826" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:40.886839 1950495 pod_ready.go:103] pod "kube-scheduler-no-preload-710826" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:39.514454 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:41.514541 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:43.386009 1950495 pod_ready.go:93] pod "kube-scheduler-no-preload-710826" in "kube-system" namespace has status "Ready":"True"
	I0827 23:53:43.386035 1950495 pod_ready.go:82] duration metric: took 11.506580804s for pod "kube-scheduler-no-preload-710826" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:43.386047 1950495 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace to be "Ready" ...
	I0827 23:53:45.394073 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:43.519764 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:46.021968 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:47.892343 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:49.892783 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:52.393405 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:48.023259 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:50.024458 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:52.511505 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:54.892671 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:57.392396 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:54.513571 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:57.022234 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:59.392601 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:01.392899 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:53:59.512743 1945499 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:00.513778 1945499 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"True"
	I0827 23:54:00.513813 1945499 pod_ready.go:82] duration metric: took 1m22.008565732s for pod "kube-controller-manager-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.513827 1945499 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-d84wl" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.522113 1945499 pod_ready.go:93] pod "kube-proxy-d84wl" in "kube-system" namespace has status "Ready":"True"
	I0827 23:54:00.522142 1945499 pod_ready.go:82] duration metric: took 8.306412ms for pod "kube-proxy-d84wl" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.522155 1945499 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.528947 1945499 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-394049" in "kube-system" namespace has status "Ready":"True"
	I0827 23:54:00.528983 1945499 pod_ready.go:82] duration metric: took 6.820274ms for pod "kube-scheduler-old-k8s-version-394049" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:00.528997 1945499 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace to be "Ready" ...
	I0827 23:54:02.538013 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:03.892725 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:05.898401 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:05.049335 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:07.536856 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:08.392761 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:10.392803 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:09.544754 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:12.036784 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:12.893635 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:15.393304 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:17.395448 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:14.037634 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:16.544787 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:19.892052 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:22.392752 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:19.036321 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:21.041602 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:24.891711 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:26.892650 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:23.535686 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:25.536146 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:27.540761 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:29.392339 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:31.895229 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:30.071722 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:32.538311 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:34.393973 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:36.394292 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:35.039074 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:37.535387 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:38.892647 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:41.392169 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:39.537746 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:41.542557 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:43.394839 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:45.893185 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:44.037226 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:46.535616 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:48.392468 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:50.392903 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:48.537805 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:50.542500 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:52.891933 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:54.892619 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:57.392557 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:53.036589 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:55.068559 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:57.535050 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:59.892106 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:01.892891 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:54:59.536221 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:01.541466 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:03.894003 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:06.392067 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:04.036840 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:06.037668 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:08.392170 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:10.891746 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:08.540614 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:11.040996 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:12.892528 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:14.896991 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:17.392794 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:13.536098 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:16.036958 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:19.392932 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:21.892600 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:18.037758 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:20.059068 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:22.542251 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:24.393620 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:26.893071 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:25.055999 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:27.549747 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:29.391539 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:31.392269 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:30.088038 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:32.535845 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:33.392940 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:35.892653 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:35.042825 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:37.537449 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:38.391836 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:40.391901 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:42.392225 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:39.541413 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:42.036947 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:44.392859 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:46.891491 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:44.541623 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:47.037061 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:48.899586 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:51.391460 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:49.536624 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:51.540666 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:53.392492 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:55.392571 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:54.039877 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:56.041431 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:57.891769 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:59.895284 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:02.392294 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:55:58.535820 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:00.569693 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:04.893757 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:07.393129 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:03.036146 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:05.063202 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:07.535319 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:09.892107 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:12.391720 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:09.542740 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:12.036240 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:14.392747 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:16.892489 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:14.041915 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:16.538564 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:18.893341 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:21.395832 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:18.539387 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:20.542941 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:23.892727 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:26.392776 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:23.035989 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:25.041259 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:27.535751 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:28.891774 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:30.891889 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:30.051150 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:32.541544 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:32.893338 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:35.392006 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:35.052137 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:37.541729 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:37.892150 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:40.392697 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:39.543135 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:42.044288 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:42.892240 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:44.892866 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:47.392565 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:44.536100 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:47.036663 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:49.892970 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:52.392281 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:49.541661 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:52.041046 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:54.893333 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:57.391963 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:54.545514 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:57.037925 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:59.392691 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:01.393790 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:56:59.537151 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:01.541807 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:03.892019 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:05.892143 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:04.036301 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:06.037323 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:08.391443 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:10.392104 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:12.392539 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:08.037761 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:10.061967 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:12.085447 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:14.892273 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:16.892703 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:14.549156 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:17.035779 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:19.393993 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:21.892544 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:19.036443 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:21.039024 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:24.392967 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:26.393588 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:23.536705 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:25.537439 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:28.397047 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:30.892404 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:28.037022 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:30.047408 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:32.535922 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:33.392798 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:35.892896 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:34.538361 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:37.040054 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:38.392874 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:40.393058 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:42.393262 1950495 pod_ready.go:103] pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:39.543065 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:41.543734 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:43.392530 1950495 pod_ready.go:82] duration metric: took 4m0.006469226s for pod "metrics-server-6867b74b74-shq79" in "kube-system" namespace to be "Ready" ...
	E0827 23:57:43.392557 1950495 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0827 23:57:43.392567 1950495 pod_ready.go:39] duration metric: took 4m11.716866754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:57:43.392583 1950495 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:57:43.392612 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0827 23:57:43.392681 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 23:57:43.444514 1950495 cri.go:89] found id: "fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d"
	I0827 23:57:43.444535 1950495 cri.go:89] found id: "7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d"
	I0827 23:57:43.444540 1950495 cri.go:89] found id: ""
	I0827 23:57:43.444548 1950495 logs.go:276] 2 containers: [fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d 7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d]
	I0827 23:57:43.444608 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.448410 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.451930 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0827 23:57:43.452055 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 23:57:43.492010 1950495 cri.go:89] found id: "443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b"
	I0827 23:57:43.492040 1950495 cri.go:89] found id: "1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b"
	I0827 23:57:43.492046 1950495 cri.go:89] found id: ""
	I0827 23:57:43.492053 1950495 logs.go:276] 2 containers: [443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b 1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b]
	I0827 23:57:43.492150 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.496054 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.499406 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0827 23:57:43.499482 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 23:57:43.552911 1950495 cri.go:89] found id: "9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5"
	I0827 23:57:43.552939 1950495 cri.go:89] found id: "5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8"
	I0827 23:57:43.552944 1950495 cri.go:89] found id: ""
	I0827 23:57:43.552952 1950495 logs.go:276] 2 containers: [9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5 5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8]
	I0827 23:57:43.553012 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.557540 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.561265 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0827 23:57:43.561347 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 23:57:43.606790 1950495 cri.go:89] found id: "0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f"
	I0827 23:57:43.606816 1950495 cri.go:89] found id: "b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58"
	I0827 23:57:43.606821 1950495 cri.go:89] found id: ""
	I0827 23:57:43.606828 1950495 logs.go:276] 2 containers: [0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58]
	I0827 23:57:43.606885 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.610382 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.614032 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0827 23:57:43.614116 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 23:57:43.660181 1950495 cri.go:89] found id: "253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a"
	I0827 23:57:43.660251 1950495 cri.go:89] found id: "e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e"
	I0827 23:57:43.660270 1950495 cri.go:89] found id: ""
	I0827 23:57:43.660293 1950495 logs.go:276] 2 containers: [253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e]
	I0827 23:57:43.660402 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.664144 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.667637 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 23:57:43.667711 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 23:57:43.710757 1950495 cri.go:89] found id: "cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865"
	I0827 23:57:43.710783 1950495 cri.go:89] found id: "3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0"
	I0827 23:57:43.710788 1950495 cri.go:89] found id: ""
	I0827 23:57:43.710796 1950495 logs.go:276] 2 containers: [cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865 3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0]
	I0827 23:57:43.710883 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.714934 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.718700 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0827 23:57:43.718825 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 23:57:43.759892 1950495 cri.go:89] found id: "abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd"
	I0827 23:57:43.759918 1950495 cri.go:89] found id: "9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03"
	I0827 23:57:43.759923 1950495 cri.go:89] found id: ""
	I0827 23:57:43.759930 1950495 logs.go:276] 2 containers: [abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd 9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03]
	I0827 23:57:43.760009 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.763755 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.766957 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0827 23:57:43.767030 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0827 23:57:43.809468 1950495 cri.go:89] found id: "37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f"
	I0827 23:57:43.809540 1950495 cri.go:89] found id: ""
	I0827 23:57:43.809557 1950495 logs.go:276] 1 containers: [37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f]
	I0827 23:57:43.809625 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.813586 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0827 23:57:43.813706 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0827 23:57:43.852825 1950495 cri.go:89] found id: "2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707"
	I0827 23:57:43.852857 1950495 cri.go:89] found id: "e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9"
	I0827 23:57:43.852862 1950495 cri.go:89] found id: ""
	I0827 23:57:43.852869 1950495 logs.go:276] 2 containers: [2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707 e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9]
	I0827 23:57:43.852945 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.856959 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:43.860440 1950495 logs.go:123] Gathering logs for kube-apiserver [fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d] ...
	I0827 23:57:43.860482 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d"
	I0827 23:57:43.918957 1950495 logs.go:123] Gathering logs for containerd ...
	I0827 23:57:43.918993 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0827 23:57:43.990451 1950495 logs.go:123] Gathering logs for kubelet ...
	I0827 23:57:43.990488 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 23:57:44.072771 1950495 logs.go:123] Gathering logs for kube-apiserver [7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d] ...
	I0827 23:57:44.072806 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d"
	I0827 23:57:44.130927 1950495 logs.go:123] Gathering logs for coredns [9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5] ...
	I0827 23:57:44.130960 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5"
	I0827 23:57:44.183451 1950495 logs.go:123] Gathering logs for kube-scheduler [0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f] ...
	I0827 23:57:44.183479 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f"
	I0827 23:57:44.231935 1950495 logs.go:123] Gathering logs for kube-scheduler [b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58] ...
	I0827 23:57:44.231963 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58"
	I0827 23:57:44.284125 1950495 logs.go:123] Gathering logs for kube-proxy [253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a] ...
	I0827 23:57:44.284168 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a"
	I0827 23:57:44.325749 1950495 logs.go:123] Gathering logs for kube-controller-manager [cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865] ...
	I0827 23:57:44.325780 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865"
	I0827 23:57:44.425509 1950495 logs.go:123] Gathering logs for kubernetes-dashboard [37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f] ...
	I0827 23:57:44.425543 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f"
	I0827 23:57:44.469487 1950495 logs.go:123] Gathering logs for dmesg ...
	I0827 23:57:44.469513 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 23:57:44.487776 1950495 logs.go:123] Gathering logs for storage-provisioner [e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9] ...
	I0827 23:57:44.487810 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9"
	I0827 23:57:44.527115 1950495 logs.go:123] Gathering logs for etcd [1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b] ...
	I0827 23:57:44.527142 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b"
	I0827 23:57:44.583321 1950495 logs.go:123] Gathering logs for kube-proxy [e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e] ...
	I0827 23:57:44.583369 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e"
	I0827 23:57:44.636771 1950495 logs.go:123] Gathering logs for kindnet [9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03] ...
	I0827 23:57:44.636800 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03"
	I0827 23:57:44.678482 1950495 logs.go:123] Gathering logs for storage-provisioner [2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707] ...
	I0827 23:57:44.678514 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707"
	I0827 23:57:44.718429 1950495 logs.go:123] Gathering logs for container status ...
	I0827 23:57:44.718459 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 23:57:44.772877 1950495 logs.go:123] Gathering logs for describe nodes ...
	I0827 23:57:44.772911 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 23:57:44.950302 1950495 logs.go:123] Gathering logs for coredns [5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8] ...
	I0827 23:57:44.950413 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8"
	I0827 23:57:44.998849 1950495 logs.go:123] Gathering logs for kube-controller-manager [3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0] ...
	I0827 23:57:44.998885 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0"
	I0827 23:57:45.203171 1950495 logs.go:123] Gathering logs for kindnet [abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd] ...
	I0827 23:57:45.203216 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd"
	I0827 23:57:45.270709 1950495 logs.go:123] Gathering logs for etcd [443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b] ...
	I0827 23:57:45.270750 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b"
	I0827 23:57:44.037557 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:46.535805 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:47.838045 1950495 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:57:47.851201 1950495 api_server.go:72] duration metric: took 4m22.485478879s to wait for apiserver process to appear ...
	I0827 23:57:47.851234 1950495 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:57:47.851272 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0827 23:57:47.851332 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 23:57:47.889350 1950495 cri.go:89] found id: "fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d"
	I0827 23:57:47.889373 1950495 cri.go:89] found id: "7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d"
	I0827 23:57:47.889379 1950495 cri.go:89] found id: ""
	I0827 23:57:47.889386 1950495 logs.go:276] 2 containers: [fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d 7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d]
	I0827 23:57:47.889450 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:47.893274 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:47.897270 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0827 23:57:47.897385 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 23:57:47.944355 1950495 cri.go:89] found id: "443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b"
	I0827 23:57:47.944467 1950495 cri.go:89] found id: "1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b"
	I0827 23:57:47.944473 1950495 cri.go:89] found id: ""
	I0827 23:57:47.944481 1950495 logs.go:276] 2 containers: [443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b 1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b]
	I0827 23:57:47.944572 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:47.948750 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:47.952681 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0827 23:57:47.952785 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 23:57:47.995078 1950495 cri.go:89] found id: "9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5"
	I0827 23:57:47.995153 1950495 cri.go:89] found id: "5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8"
	I0827 23:57:47.995173 1950495 cri.go:89] found id: ""
	I0827 23:57:47.995185 1950495 logs.go:276] 2 containers: [9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5 5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8]
	I0827 23:57:47.995243 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:47.999912 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.005506 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0827 23:57:48.005664 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 23:57:48.063158 1950495 cri.go:89] found id: "0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f"
	I0827 23:57:48.063232 1950495 cri.go:89] found id: "b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58"
	I0827 23:57:48.063251 1950495 cri.go:89] found id: ""
	I0827 23:57:48.063274 1950495 logs.go:276] 2 containers: [0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58]
	I0827 23:57:48.063355 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.067136 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.071390 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0827 23:57:48.071569 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 23:57:48.111554 1950495 cri.go:89] found id: "253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a"
	I0827 23:57:48.111584 1950495 cri.go:89] found id: "e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e"
	I0827 23:57:48.111589 1950495 cri.go:89] found id: ""
	I0827 23:57:48.111597 1950495 logs.go:276] 2 containers: [253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e]
	I0827 23:57:48.111660 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.115907 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.119524 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 23:57:48.119624 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 23:57:48.169736 1950495 cri.go:89] found id: "cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865"
	I0827 23:57:48.169758 1950495 cri.go:89] found id: "3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0"
	I0827 23:57:48.169764 1950495 cri.go:89] found id: ""
	I0827 23:57:48.169772 1950495 logs.go:276] 2 containers: [cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865 3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0]
	I0827 23:57:48.169837 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.173883 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.177742 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0827 23:57:48.177842 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 23:57:48.216575 1950495 cri.go:89] found id: "abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd"
	I0827 23:57:48.216642 1950495 cri.go:89] found id: "9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03"
	I0827 23:57:48.216661 1950495 cri.go:89] found id: ""
	I0827 23:57:48.216682 1950495 logs.go:276] 2 containers: [abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd 9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03]
	I0827 23:57:48.216766 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.220665 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.225494 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0827 23:57:48.225588 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0827 23:57:48.271566 1950495 cri.go:89] found id: "37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f"
	I0827 23:57:48.271593 1950495 cri.go:89] found id: ""
	I0827 23:57:48.271601 1950495 logs.go:276] 1 containers: [37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f]
	I0827 23:57:48.271658 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.275269 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0827 23:57:48.275346 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0827 23:57:48.318568 1950495 cri.go:89] found id: "2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707"
	I0827 23:57:48.318593 1950495 cri.go:89] found id: "e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9"
	I0827 23:57:48.318598 1950495 cri.go:89] found id: ""
	I0827 23:57:48.318606 1950495 logs.go:276] 2 containers: [2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707 e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9]
	I0827 23:57:48.318663 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.322304 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:48.325762 1950495 logs.go:123] Gathering logs for dmesg ...
	I0827 23:57:48.325790 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 23:57:48.342320 1950495 logs.go:123] Gathering logs for etcd [1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b] ...
	I0827 23:57:48.342362 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b"
	I0827 23:57:48.395893 1950495 logs.go:123] Gathering logs for kubernetes-dashboard [37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f] ...
	I0827 23:57:48.395924 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f"
	I0827 23:57:48.438042 1950495 logs.go:123] Gathering logs for container status ...
	I0827 23:57:48.438071 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 23:57:48.484157 1950495 logs.go:123] Gathering logs for kubelet ...
	I0827 23:57:48.484187 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 23:57:48.558011 1950495 logs.go:123] Gathering logs for coredns [5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8] ...
	I0827 23:57:48.558046 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8"
	I0827 23:57:48.596729 1950495 logs.go:123] Gathering logs for etcd [443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b] ...
	I0827 23:57:48.596758 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b"
	I0827 23:57:48.641281 1950495 logs.go:123] Gathering logs for kube-scheduler [b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58] ...
	I0827 23:57:48.641313 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58"
	I0827 23:57:48.687212 1950495 logs.go:123] Gathering logs for kube-proxy [253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a] ...
	I0827 23:57:48.687246 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a"
	I0827 23:57:48.726062 1950495 logs.go:123] Gathering logs for kube-proxy [e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e] ...
	I0827 23:57:48.726090 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e"
	I0827 23:57:48.774283 1950495 logs.go:123] Gathering logs for kindnet [9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03] ...
	I0827 23:57:48.774313 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03"
	I0827 23:57:48.818358 1950495 logs.go:123] Gathering logs for storage-provisioner [e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9] ...
	I0827 23:57:48.818391 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9"
	I0827 23:57:48.858739 1950495 logs.go:123] Gathering logs for containerd ...
	I0827 23:57:48.858770 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0827 23:57:48.926683 1950495 logs.go:123] Gathering logs for kube-apiserver [7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d] ...
	I0827 23:57:48.926722 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d"
	I0827 23:57:48.977218 1950495 logs.go:123] Gathering logs for kube-apiserver [fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d] ...
	I0827 23:57:48.977258 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d"
	I0827 23:57:49.048545 1950495 logs.go:123] Gathering logs for coredns [9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5] ...
	I0827 23:57:49.048626 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5"
	I0827 23:57:49.088620 1950495 logs.go:123] Gathering logs for kube-scheduler [0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f] ...
	I0827 23:57:49.088650 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f"
	I0827 23:57:49.131893 1950495 logs.go:123] Gathering logs for kube-controller-manager [cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865] ...
	I0827 23:57:49.131924 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865"
	I0827 23:57:49.204810 1950495 logs.go:123] Gathering logs for kube-controller-manager [3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0] ...
	I0827 23:57:49.204847 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0"
	I0827 23:57:49.260954 1950495 logs.go:123] Gathering logs for kindnet [abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd] ...
	I0827 23:57:49.260991 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd"
	I0827 23:57:49.304453 1950495 logs.go:123] Gathering logs for storage-provisioner [2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707] ...
	I0827 23:57:49.304484 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707"
	I0827 23:57:49.344361 1950495 logs.go:123] Gathering logs for describe nodes ...
	I0827 23:57:49.344481 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 23:57:51.982014 1950495 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0827 23:57:51.989673 1950495 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0827 23:57:51.990817 1950495 api_server.go:141] control plane version: v1.31.0
	I0827 23:57:51.990843 1950495 api_server.go:131] duration metric: took 4.139600514s to wait for apiserver health ...
	I0827 23:57:51.990852 1950495 system_pods.go:43] waiting for kube-system pods to appear ...
	I0827 23:57:51.990875 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0827 23:57:51.990938 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 23:57:52.049131 1950495 cri.go:89] found id: "fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d"
	I0827 23:57:52.049156 1950495 cri.go:89] found id: "7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d"
	I0827 23:57:52.049161 1950495 cri.go:89] found id: ""
	I0827 23:57:52.049169 1950495 logs.go:276] 2 containers: [fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d 7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d]
	I0827 23:57:52.049229 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.053399 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.057400 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0827 23:57:52.057472 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 23:57:52.105022 1950495 cri.go:89] found id: "443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b"
	I0827 23:57:52.105046 1950495 cri.go:89] found id: "1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b"
	I0827 23:57:52.105051 1950495 cri.go:89] found id: ""
	I0827 23:57:52.105059 1950495 logs.go:276] 2 containers: [443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b 1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b]
	I0827 23:57:52.105120 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.109280 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.113225 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0827 23:57:52.113325 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 23:57:52.160333 1950495 cri.go:89] found id: "9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5"
	I0827 23:57:52.160354 1950495 cri.go:89] found id: "5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8"
	I0827 23:57:52.160360 1950495 cri.go:89] found id: ""
	I0827 23:57:52.160394 1950495 logs.go:276] 2 containers: [9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5 5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8]
	I0827 23:57:52.160458 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.164527 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.169968 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0827 23:57:52.170050 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 23:57:52.214281 1950495 cri.go:89] found id: "0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f"
	I0827 23:57:52.214303 1950495 cri.go:89] found id: "b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58"
	I0827 23:57:52.214308 1950495 cri.go:89] found id: ""
	I0827 23:57:52.214315 1950495 logs.go:276] 2 containers: [0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58]
	I0827 23:57:52.214393 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.218134 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.221618 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0827 23:57:52.221698 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 23:57:52.261454 1950495 cri.go:89] found id: "253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a"
	I0827 23:57:52.261476 1950495 cri.go:89] found id: "e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e"
	I0827 23:57:52.261481 1950495 cri.go:89] found id: ""
	I0827 23:57:52.261488 1950495 logs.go:276] 2 containers: [253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e]
	I0827 23:57:52.261545 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.265540 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.269256 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 23:57:52.269344 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 23:57:52.317050 1950495 cri.go:89] found id: "cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865"
	I0827 23:57:52.317071 1950495 cri.go:89] found id: "3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0"
	I0827 23:57:52.317076 1950495 cri.go:89] found id: ""
	I0827 23:57:52.317084 1950495 logs.go:276] 2 containers: [cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865 3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0]
	I0827 23:57:52.317160 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.321071 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.324864 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0827 23:57:52.324937 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 23:57:52.384987 1950495 cri.go:89] found id: "abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd"
	I0827 23:57:52.385013 1950495 cri.go:89] found id: "9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03"
	I0827 23:57:52.385029 1950495 cri.go:89] found id: ""
	I0827 23:57:52.385037 1950495 logs.go:276] 2 containers: [abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd 9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03]
	I0827 23:57:52.385111 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.391141 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.401145 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0827 23:57:52.401345 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0827 23:57:52.469099 1950495 cri.go:89] found id: "2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707"
	I0827 23:57:52.469119 1950495 cri.go:89] found id: "e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9"
	I0827 23:57:52.469125 1950495 cri.go:89] found id: ""
	I0827 23:57:52.469131 1950495 logs.go:276] 2 containers: [2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707 e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9]
	I0827 23:57:52.469206 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.473146 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.477507 1950495 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0827 23:57:52.477605 1950495 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0827 23:57:52.519043 1950495 cri.go:89] found id: "37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f"
	I0827 23:57:52.519106 1950495 cri.go:89] found id: ""
	I0827 23:57:52.519127 1950495 logs.go:276] 1 containers: [37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f]
	I0827 23:57:52.519220 1950495 ssh_runner.go:195] Run: which crictl
	I0827 23:57:52.522986 1950495 logs.go:123] Gathering logs for dmesg ...
	I0827 23:57:52.523013 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 23:57:52.554957 1950495 logs.go:123] Gathering logs for describe nodes ...
	I0827 23:57:52.555044 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 23:57:52.683702 1950495 logs.go:123] Gathering logs for kube-apiserver [7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d] ...
	I0827 23:57:52.683730 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7eeb834b960d47d70b3aff513ee78b6fdb3981df22edacc56b7cb7696d92e39d"
	I0827 23:57:48.538154 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:51.036087 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:52.753178 1950495 logs.go:123] Gathering logs for kube-controller-manager [cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865] ...
	I0827 23:57:52.753213 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb24367aaaecbaac8d5793d07ed4e77d1fe624807621b2d14b96f71eb8a3e865"
	I0827 23:57:52.826643 1950495 logs.go:123] Gathering logs for kube-controller-manager [3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0] ...
	I0827 23:57:52.826692 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3214b2bf63c83209e48dba86958860f95dc13d661d173621761feb70101be3c0"
	I0827 23:57:52.909463 1950495 logs.go:123] Gathering logs for kubernetes-dashboard [37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f] ...
	I0827 23:57:52.909502 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37375e3ac2ad29c8f5d1122c62552e0a2c394a7ec2cca31caade50b5c68ec57f"
	I0827 23:57:52.961438 1950495 logs.go:123] Gathering logs for containerd ...
	I0827 23:57:52.961468 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0827 23:57:53.033265 1950495 logs.go:123] Gathering logs for kube-apiserver [fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d] ...
	I0827 23:57:53.033305 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fe84704b7ec75f3193ef02843fe7925f27be359c6442cfaf08b9f6331b56d51d"
	I0827 23:57:53.093820 1950495 logs.go:123] Gathering logs for storage-provisioner [2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707] ...
	I0827 23:57:53.093856 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2d3fde405c4f70c241a237307888dbc766233bc3a92b248405db0a2742e2a707"
	I0827 23:57:53.149853 1950495 logs.go:123] Gathering logs for kubelet ...
	I0827 23:57:53.149882 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0827 23:57:53.228542 1950495 logs.go:123] Gathering logs for coredns [9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5] ...
	I0827 23:57:53.228586 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c280433fc76d28f921c042176fd9bb1616fb0c3c3e3b7fb9300c083169f45f5"
	I0827 23:57:53.277415 1950495 logs.go:123] Gathering logs for kube-scheduler [0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f] ...
	I0827 23:57:53.277445 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d1cf85ef119b2b508445f437ec6a72b6e9b5d923a65336c1472928c8afc5e4f"
	I0827 23:57:53.326821 1950495 logs.go:123] Gathering logs for kube-scheduler [b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58] ...
	I0827 23:57:53.326856 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b713cc1b1e1c0f2b792f604b17c7bb3b170955cb846e1ad3628105edd6e93a58"
	I0827 23:57:53.385208 1950495 logs.go:123] Gathering logs for kindnet [abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd] ...
	I0827 23:57:53.385240 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 abde7dff43fa43bf43a929f61cf84f0eb9f24c56d99b9aa35541d66d431bc5fd"
	I0827 23:57:53.430516 1950495 logs.go:123] Gathering logs for storage-provisioner [e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9] ...
	I0827 23:57:53.430544 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e300611d2e10fce610e01a8992253bd8224a55769ac78984311fc121390a2bd9"
	I0827 23:57:53.472875 1950495 logs.go:123] Gathering logs for container status ...
	I0827 23:57:53.472905 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 23:57:53.546843 1950495 logs.go:123] Gathering logs for etcd [443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b] ...
	I0827 23:57:53.546892 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 443f90b5f48af310c2a0b29003f0492d900d5b22c61ca83e0398e565e9627e8b"
	I0827 23:57:53.604511 1950495 logs.go:123] Gathering logs for etcd [1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b] ...
	I0827 23:57:53.604543 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1865b4877751d1fa034595f97e7c1192cb707868710e8d39e41ff972a250867b"
	I0827 23:57:53.653756 1950495 logs.go:123] Gathering logs for coredns [5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8] ...
	I0827 23:57:53.653791 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a88bb9f5a2bd3906f3509ae2cc90d1a6624ce428bde75441f3e35a60e62e8c8"
	I0827 23:57:53.709950 1950495 logs.go:123] Gathering logs for kube-proxy [253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a] ...
	I0827 23:57:53.709989 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 253f4b2cc40f2d9b5d44222a444d2a4bc7992f0997e349067086d6afbbb4630a"
	I0827 23:57:53.748979 1950495 logs.go:123] Gathering logs for kube-proxy [e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e] ...
	I0827 23:57:53.749048 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e9b0c909017017b75549291c3809aa4dcbec2431b93ad52fcc7b181d5a9a9c9e"
	I0827 23:57:53.788655 1950495 logs.go:123] Gathering logs for kindnet [9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03] ...
	I0827 23:57:53.788685 1950495 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a811044a4368c2294519477e32cf05e41942851f970c19ca2aaa5bb256f7f03"
	I0827 23:57:56.337085 1950495 system_pods.go:59] 9 kube-system pods found
	I0827 23:57:56.337123 1950495 system_pods.go:61] "coredns-6f6b679f8f-2qvsz" [dd70338e-f861-42f2-a0fe-55ad9ea31d23] Running
	I0827 23:57:56.337130 1950495 system_pods.go:61] "etcd-no-preload-710826" [b4f98313-2367-4bf1-b9be-c6c0ae5f20a3] Running
	I0827 23:57:56.337135 1950495 system_pods.go:61] "kindnet-vkvrh" [db81af53-fe46-46b2-985c-f059436a0204] Running
	I0827 23:57:56.337140 1950495 system_pods.go:61] "kube-apiserver-no-preload-710826" [514b71b8-b0ca-4339-bf3e-477529bce61d] Running
	I0827 23:57:56.337163 1950495 system_pods.go:61] "kube-controller-manager-no-preload-710826" [3bca269c-e9de-4acb-a83b-5598f0ecd1c5] Running
	I0827 23:57:56.337172 1950495 system_pods.go:61] "kube-proxy-n47gz" [3da250b4-9476-4c7d-b1c9-131e542c411a] Running
	I0827 23:57:56.337176 1950495 system_pods.go:61] "kube-scheduler-no-preload-710826" [a7203ec3-15c3-4af5-b5b0-7f20eab134a8] Running
	I0827 23:57:56.337183 1950495 system_pods.go:61] "metrics-server-6867b74b74-shq79" [f0d7e553-7cd8-4b83-bbe5-41bd578df9d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0827 23:57:56.337188 1950495 system_pods.go:61] "storage-provisioner" [0138b06b-b76e-4f1f-8c6b-cf6913e91522] Running
	I0827 23:57:56.337201 1950495 system_pods.go:74] duration metric: took 4.346341908s to wait for pod list to return data ...
	I0827 23:57:56.337217 1950495 default_sa.go:34] waiting for default service account to be created ...
	I0827 23:57:56.339617 1950495 default_sa.go:45] found service account: "default"
	I0827 23:57:56.339646 1950495 default_sa.go:55] duration metric: took 2.42037ms for default service account to be created ...
	I0827 23:57:56.339657 1950495 system_pods.go:116] waiting for k8s-apps to be running ...
	I0827 23:57:56.347180 1950495 system_pods.go:86] 9 kube-system pods found
	I0827 23:57:56.347214 1950495 system_pods.go:89] "coredns-6f6b679f8f-2qvsz" [dd70338e-f861-42f2-a0fe-55ad9ea31d23] Running
	I0827 23:57:56.347221 1950495 system_pods.go:89] "etcd-no-preload-710826" [b4f98313-2367-4bf1-b9be-c6c0ae5f20a3] Running
	I0827 23:57:56.347227 1950495 system_pods.go:89] "kindnet-vkvrh" [db81af53-fe46-46b2-985c-f059436a0204] Running
	I0827 23:57:56.347231 1950495 system_pods.go:89] "kube-apiserver-no-preload-710826" [514b71b8-b0ca-4339-bf3e-477529bce61d] Running
	I0827 23:57:56.347236 1950495 system_pods.go:89] "kube-controller-manager-no-preload-710826" [3bca269c-e9de-4acb-a83b-5598f0ecd1c5] Running
	I0827 23:57:56.347240 1950495 system_pods.go:89] "kube-proxy-n47gz" [3da250b4-9476-4c7d-b1c9-131e542c411a] Running
	I0827 23:57:56.347244 1950495 system_pods.go:89] "kube-scheduler-no-preload-710826" [a7203ec3-15c3-4af5-b5b0-7f20eab134a8] Running
	I0827 23:57:56.347251 1950495 system_pods.go:89] "metrics-server-6867b74b74-shq79" [f0d7e553-7cd8-4b83-bbe5-41bd578df9d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0827 23:57:56.347258 1950495 system_pods.go:89] "storage-provisioner" [0138b06b-b76e-4f1f-8c6b-cf6913e91522] Running
	I0827 23:57:56.347275 1950495 system_pods.go:126] duration metric: took 7.610714ms to wait for k8s-apps to be running ...
	I0827 23:57:56.347289 1950495 system_svc.go:44] waiting for kubelet service to be running ....
	I0827 23:57:56.347351 1950495 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:57:56.359913 1950495 system_svc.go:56] duration metric: took 12.615184ms WaitForService to wait for kubelet
	I0827 23:57:56.359943 1950495 kubeadm.go:582] duration metric: took 4m30.994225227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0827 23:57:56.359969 1950495 node_conditions.go:102] verifying NodePressure condition ...
	I0827 23:57:56.363551 1950495 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0827 23:57:56.363590 1950495 node_conditions.go:123] node cpu capacity is 2
	I0827 23:57:56.363631 1950495 node_conditions.go:105] duration metric: took 3.630945ms to run NodePressure ...
	I0827 23:57:56.363660 1950495 start.go:241] waiting for startup goroutines ...
	I0827 23:57:56.363675 1950495 start.go:246] waiting for cluster config update ...
	I0827 23:57:56.363690 1950495 start.go:255] writing updated cluster config ...
	I0827 23:57:56.364094 1950495 ssh_runner.go:195] Run: rm -f paused
	I0827 23:57:56.431415 1950495 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0827 23:57:56.433832 1950495 out.go:177] * Done! kubectl is now configured to use "no-preload-710826" cluster and "default" namespace by default
	I0827 23:57:53.039962 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:55.050692 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:57:57.541257 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:58:00.089363 1945499 pod_ready.go:103] pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace has status "Ready":"False"
	I0827 23:58:00.543555 1945499 pod_ready.go:82] duration metric: took 4m0.014542084s for pod "metrics-server-9975d5f86-hrfcg" in "kube-system" namespace to be "Ready" ...
	E0827 23:58:00.543589 1945499 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0827 23:58:00.543600 1945499 pod_ready.go:39] duration metric: took 5m30.35086527s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0827 23:58:00.543615 1945499 api_server.go:52] waiting for apiserver process to appear ...
	I0827 23:58:00.543647 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0827 23:58:00.543719 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0827 23:58:00.597945 1945499 cri.go:89] found id: "236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98"
	I0827 23:58:00.597976 1945499 cri.go:89] found id: "8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3"
	I0827 23:58:00.597982 1945499 cri.go:89] found id: ""
	I0827 23:58:00.597990 1945499 logs.go:276] 2 containers: [236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98 8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3]
	I0827 23:58:00.598054 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.602301 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.606401 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0827 23:58:00.606492 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0827 23:58:00.652901 1945499 cri.go:89] found id: "b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689"
	I0827 23:58:00.652924 1945499 cri.go:89] found id: "ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9"
	I0827 23:58:00.652929 1945499 cri.go:89] found id: ""
	I0827 23:58:00.652937 1945499 logs.go:276] 2 containers: [b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689 ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9]
	I0827 23:58:00.653001 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.657090 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.660704 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0827 23:58:00.660777 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0827 23:58:00.699407 1945499 cri.go:89] found id: "ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7"
	I0827 23:58:00.699433 1945499 cri.go:89] found id: "3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62"
	I0827 23:58:00.699438 1945499 cri.go:89] found id: ""
	I0827 23:58:00.699446 1945499 logs.go:276] 2 containers: [ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7 3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62]
	I0827 23:58:00.699516 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.703499 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.707751 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0827 23:58:00.707839 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0827 23:58:00.749347 1945499 cri.go:89] found id: "1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267"
	I0827 23:58:00.749421 1945499 cri.go:89] found id: "cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808"
	I0827 23:58:00.749433 1945499 cri.go:89] found id: ""
	I0827 23:58:00.749442 1945499 logs.go:276] 2 containers: [1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267 cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808]
	I0827 23:58:00.749515 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.753278 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.756794 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0827 23:58:00.756926 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0827 23:58:00.798299 1945499 cri.go:89] found id: "b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e"
	I0827 23:58:00.798331 1945499 cri.go:89] found id: "afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205"
	I0827 23:58:00.798337 1945499 cri.go:89] found id: ""
	I0827 23:58:00.798344 1945499 logs.go:276] 2 containers: [b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205]
	I0827 23:58:00.798412 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.802454 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.806186 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0827 23:58:00.806284 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0827 23:58:00.849117 1945499 cri.go:89] found id: "b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea"
	I0827 23:58:00.849140 1945499 cri.go:89] found id: "30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618"
	I0827 23:58:00.849145 1945499 cri.go:89] found id: ""
	I0827 23:58:00.849153 1945499 logs.go:276] 2 containers: [b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea 30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618]
	I0827 23:58:00.849226 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.852721 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.856151 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0827 23:58:00.856227 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0827 23:58:00.899038 1945499 cri.go:89] found id: "1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd"
	I0827 23:58:00.899063 1945499 cri.go:89] found id: "575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c"
	I0827 23:58:00.899068 1945499 cri.go:89] found id: ""
	I0827 23:58:00.899075 1945499 logs.go:276] 2 containers: [1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd 575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c]
	I0827 23:58:00.899130 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.903019 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.907211 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0827 23:58:00.907319 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0827 23:58:00.953896 1945499 cri.go:89] found id: "d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9"
	I0827 23:58:00.953964 1945499 cri.go:89] found id: ""
	I0827 23:58:00.953978 1945499 logs.go:276] 1 containers: [d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9]
	I0827 23:58:00.954053 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:00.958271 1945499 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0827 23:58:00.958391 1945499 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0827 23:58:01.010042 1945499 cri.go:89] found id: "42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf"
	I0827 23:58:01.010071 1945499 cri.go:89] found id: "592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c"
	I0827 23:58:01.010077 1945499 cri.go:89] found id: ""
	I0827 23:58:01.010085 1945499 logs.go:276] 2 containers: [42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf 592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c]
	I0827 23:58:01.010157 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:01.015859 1945499 ssh_runner.go:195] Run: which crictl
	I0827 23:58:01.027889 1945499 logs.go:123] Gathering logs for kubelet ...
	I0827 23:58:01.027965 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0827 23:58:01.085733 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:29 old-k8s-version-394049 kubelet[661]: E0827 23:52:29.904499     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.085965 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:29 old-k8s-version-394049 kubelet[661]: E0827 23:52:29.905231     661 reflector.go:138] object-"kube-system"/"coredns-token-h2fzw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-h2fzw" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.086179 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:29 old-k8s-version-394049 kubelet[661]: E0827 23:52:29.905317     661 reflector.go:138] object-"kube-system"/"kindnet-token-nhzrn": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-nhzrn" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.089944 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162229     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-4fdqz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-4fdqz" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.090154 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162511     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.090397 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162590     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-dhs5r": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-dhs5r" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.090605 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162658     661 reflector.go:138] object-"default"/"default-token-bdzp7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-bdzp7" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.090825 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:30 old-k8s-version-394049 kubelet[661]: E0827 23:52:30.162723     661 reflector.go:138] object-"kube-system"/"metrics-server-token-nrslw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-nrslw" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.099068 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:34 old-k8s-version-394049 kubelet[661]: E0827 23:52:34.053654     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.099263 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:34 old-k8s-version-394049 kubelet[661]: E0827 23:52:34.649560     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.102070 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:46 old-k8s-version-394049 kubelet[661]: E0827 23:52:46.414539     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.102400 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:48 old-k8s-version-394049 kubelet[661]: E0827 23:52:48.165741     661 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-sw62f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-sw62f" is forbidden: User "system:node:old-k8s-version-394049" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-394049' and this object
	W0827 23:58:01.105519 1945499 logs.go:138] Found kubelet problem: Aug 27 23:52:59 old-k8s-version-394049 kubelet[661]: E0827 23:52:59.405677     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.106456 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:00 old-k8s-version-394049 kubelet[661]: E0827 23:53:00.763341     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.106810 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:01 old-k8s-version-394049 kubelet[661]: E0827 23:53:01.767516     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.107142 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:02 old-k8s-version-394049 kubelet[661]: E0827 23:53:02.770141     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.109947 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:10 old-k8s-version-394049 kubelet[661]: E0827 23:53:10.412938     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.110536 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:15 old-k8s-version-394049 kubelet[661]: E0827 23:53:15.810709     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.110863 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:21 old-k8s-version-394049 kubelet[661]: E0827 23:53:21.214756     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.111048 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:23 old-k8s-version-394049 kubelet[661]: E0827 23:53:23.403844     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.111379 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:32 old-k8s-version-394049 kubelet[661]: E0827 23:53:32.403201     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.111565 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:35 old-k8s-version-394049 kubelet[661]: E0827 23:53:35.404624     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.111882 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:47 old-k8s-version-394049 kubelet[661]: E0827 23:53:47.404977     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.112337 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:47 old-k8s-version-394049 kubelet[661]: E0827 23:53:47.913818     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.112668 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:51 old-k8s-version-394049 kubelet[661]: E0827 23:53:51.214411     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.115132 1945499 logs.go:138] Found kubelet problem: Aug 27 23:53:59 old-k8s-version-394049 kubelet[661]: E0827 23:53:59.420594     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.115465 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:03 old-k8s-version-394049 kubelet[661]: E0827 23:54:03.408451     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.115649 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:13 old-k8s-version-394049 kubelet[661]: E0827 23:54:13.404006     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.115974 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:16 old-k8s-version-394049 kubelet[661]: E0827 23:54:16.404599     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.116158 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:26 old-k8s-version-394049 kubelet[661]: E0827 23:54:26.403617     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.116755 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:32 old-k8s-version-394049 kubelet[661]: E0827 23:54:32.101869     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.116939 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:40 old-k8s-version-394049 kubelet[661]: E0827 23:54:40.403413     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.117265 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:41 old-k8s-version-394049 kubelet[661]: E0827 23:54:41.214563     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.117581 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:54 old-k8s-version-394049 kubelet[661]: E0827 23:54:54.403985     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.117777 1945499 logs.go:138] Found kubelet problem: Aug 27 23:54:54 old-k8s-version-394049 kubelet[661]: E0827 23:54:54.404224     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.117960 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:05 old-k8s-version-394049 kubelet[661]: E0827 23:55:05.403501     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.118284 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:08 old-k8s-version-394049 kubelet[661]: E0827 23:55:08.403160     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.118468 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:17 old-k8s-version-394049 kubelet[661]: E0827 23:55:17.404616     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.118793 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:21 old-k8s-version-394049 kubelet[661]: E0827 23:55:21.403616     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.121238 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:29 old-k8s-version-394049 kubelet[661]: E0827 23:55:29.412117     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0827 23:58:01.121567 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:34 old-k8s-version-394049 kubelet[661]: E0827 23:55:34.403137     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.121750 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:43 old-k8s-version-394049 kubelet[661]: E0827 23:55:43.403788     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.122077 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:49 old-k8s-version-394049 kubelet[661]: E0827 23:55:49.404105     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.122261 1945499 logs.go:138] Found kubelet problem: Aug 27 23:55:58 old-k8s-version-394049 kubelet[661]: E0827 23:55:58.403447     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.122851 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:01 old-k8s-version-394049 kubelet[661]: E0827 23:56:01.416088     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.123034 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:09 old-k8s-version-394049 kubelet[661]: E0827 23:56:09.404354     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.123360 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:11 old-k8s-version-394049 kubelet[661]: E0827 23:56:11.216894     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.123547 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:23 old-k8s-version-394049 kubelet[661]: E0827 23:56:23.403678     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.123877 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:24 old-k8s-version-394049 kubelet[661]: E0827 23:56:24.403435     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.124062 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:38 old-k8s-version-394049 kubelet[661]: E0827 23:56:38.403581     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.124534 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:39 old-k8s-version-394049 kubelet[661]: E0827 23:56:39.403644     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.124726 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:50 old-k8s-version-394049 kubelet[661]: E0827 23:56:50.403803     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.125058 1945499 logs.go:138] Found kubelet problem: Aug 27 23:56:54 old-k8s-version-394049 kubelet[661]: E0827 23:56:54.403100     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.125243 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:03 old-k8s-version-394049 kubelet[661]: E0827 23:57:03.403600     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.125572 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:09 old-k8s-version-394049 kubelet[661]: E0827 23:57:09.403474     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.125757 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:17 old-k8s-version-394049 kubelet[661]: E0827 23:57:17.403665     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.126082 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:22 old-k8s-version-394049 kubelet[661]: E0827 23:57:22.403246     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.126268 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:30 old-k8s-version-394049 kubelet[661]: E0827 23:57:30.403646     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.126596 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:36 old-k8s-version-394049 kubelet[661]: E0827 23:57:36.403163     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.126782 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:43 old-k8s-version-394049 kubelet[661]: E0827 23:57:43.403668     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:01.127136 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:51 old-k8s-version-394049 kubelet[661]: E0827 23:57:51.403157     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:01.127321 1945499 logs.go:138] Found kubelet problem: Aug 27 23:57:57 old-k8s-version-394049 kubelet[661]: E0827 23:57:57.404181     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0827 23:58:01.127331 1945499 logs.go:123] Gathering logs for etcd [ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9] ...
	I0827 23:58:01.127346 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9"
	I0827 23:58:01.173990 1945499 logs.go:123] Gathering logs for kube-controller-manager [30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618] ...
	I0827 23:58:01.174026 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618"
	I0827 23:58:01.230616 1945499 logs.go:123] Gathering logs for kindnet [1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd] ...
	I0827 23:58:01.230650 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd"
	I0827 23:58:01.275585 1945499 logs.go:123] Gathering logs for storage-provisioner [592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c] ...
	I0827 23:58:01.275621 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c"
	I0827 23:58:01.321719 1945499 logs.go:123] Gathering logs for kubernetes-dashboard [d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9] ...
	I0827 23:58:01.321752 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9"
	I0827 23:58:01.367943 1945499 logs.go:123] Gathering logs for dmesg ...
	I0827 23:58:01.367974 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0827 23:58:01.385565 1945499 logs.go:123] Gathering logs for describe nodes ...
	I0827 23:58:01.385595 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0827 23:58:01.542426 1945499 logs.go:123] Gathering logs for etcd [b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689] ...
	I0827 23:58:01.542461 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689"
	I0827 23:58:01.586413 1945499 logs.go:123] Gathering logs for kube-scheduler [cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808] ...
	I0827 23:58:01.586445 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808"
	I0827 23:58:01.634391 1945499 logs.go:123] Gathering logs for kube-proxy [b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e] ...
	I0827 23:58:01.634426 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e"
	I0827 23:58:01.674602 1945499 logs.go:123] Gathering logs for kube-controller-manager [b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea] ...
	I0827 23:58:01.674642 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea"
	I0827 23:58:01.756052 1945499 logs.go:123] Gathering logs for kindnet [575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c] ...
	I0827 23:58:01.756149 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c"
	I0827 23:58:01.829677 1945499 logs.go:123] Gathering logs for containerd ...
	I0827 23:58:01.829716 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0827 23:58:01.893566 1945499 logs.go:123] Gathering logs for kube-apiserver [236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98] ...
	I0827 23:58:01.893606 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98"
	I0827 23:58:01.975220 1945499 logs.go:123] Gathering logs for coredns [ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7] ...
	I0827 23:58:01.975254 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7"
	I0827 23:58:02.039338 1945499 logs.go:123] Gathering logs for coredns [3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62] ...
	I0827 23:58:02.039366 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62"
	I0827 23:58:02.085786 1945499 logs.go:123] Gathering logs for kube-scheduler [1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267] ...
	I0827 23:58:02.085819 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267"
	I0827 23:58:02.132044 1945499 logs.go:123] Gathering logs for kube-proxy [afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205] ...
	I0827 23:58:02.132073 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205"
	I0827 23:58:02.179174 1945499 logs.go:123] Gathering logs for storage-provisioner [42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf] ...
	I0827 23:58:02.179207 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf"
	I0827 23:58:02.220200 1945499 logs.go:123] Gathering logs for container status ...
	I0827 23:58:02.220234 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0827 23:58:02.269966 1945499 logs.go:123] Gathering logs for kube-apiserver [8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3] ...
	I0827 23:58:02.269997 1945499 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3"
	I0827 23:58:02.326500 1945499 out.go:358] Setting ErrFile to fd 2...
	I0827 23:58:02.326532 1945499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0827 23:58:02.326594 1945499 out.go:270] X Problems detected in kubelet:
	W0827 23:58:02.326608 1945499 out.go:270]   Aug 27 23:57:30 old-k8s-version-394049 kubelet[661]: E0827 23:57:30.403646     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:02.326640 1945499 out.go:270]   Aug 27 23:57:36 old-k8s-version-394049 kubelet[661]: E0827 23:57:36.403163     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:02.326647 1945499 out.go:270]   Aug 27 23:57:43 old-k8s-version-394049 kubelet[661]: E0827 23:57:43.403668     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0827 23:58:02.326656 1945499 out.go:270]   Aug 27 23:57:51 old-k8s-version-394049 kubelet[661]: E0827 23:57:51.403157     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	W0827 23:58:02.326661 1945499 out.go:270]   Aug 27 23:57:57 old-k8s-version-394049 kubelet[661]: E0827 23:57:57.404181     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0827 23:58:02.326666 1945499 out.go:358] Setting ErrFile to fd 2...
	I0827 23:58:02.326675 1945499 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:58:12.327421 1945499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:58:12.341561 1945499 api_server.go:72] duration metric: took 6m0.665672398s to wait for apiserver process to appear ...
	I0827 23:58:12.341586 1945499 api_server.go:88] waiting for apiserver healthz status ...
	I0827 23:58:12.344259 1945499 out.go:201] 
	W0827 23:58:12.346377 1945499 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0827 23:58:12.346407 1945499 out.go:270] * 
	W0827 23:58:12.347367 1945499 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0827 23:58:12.348889 1945499 out.go:201] 
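For readers triaging this failure, the per-component logs gathered above can be replayed by hand; the sketch below simply repeats the same commands the harness records in the trace (crictl ps/logs, journalctl), and assumes a shell on the node obtained with `minikube ssh -p old-k8s-version-394049` (profile name taken from this report) and that `crictl` is installed at the paths shown.

# Sketch: replay the harness's log gathering on the node (hedged; assumes `minikube ssh -p old-k8s-version-394049`)
APISERVER_ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)  # newest kube-apiserver container ID
sudo /usr/bin/crictl logs --tail 400 "$APISERVER_ID"                        # same tail depth the harness uses
sudo journalctl -u kubelet -n 400      # source of the "Found kubelet problem" lines above
sudo journalctl -u containerd -n 400   # source of the containerd section below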
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	8db6aabd38edb       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   a61bce0b122c1       dashboard-metrics-scraper-8d5bb5db8-72r62
	d6670465175a2       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   bf5ab9c602753       kubernetes-dashboard-cd95d586-27b8v
	fa7c8d0de2463       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   4e63e9657f88e       busybox
	1cbb985b30629       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   e4e313ae645da       kindnet-bv2q4
	42597d6ccc6c9       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   0625227132cff       storage-provisioner
	b794569e1af8d       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   ac499c672a9ee       kube-proxy-d84wl
	ead4d00fa7425       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   8294078efaf22       coredns-74ff55c5b-fbhfc
	236ee37eeeb99       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   fb9b459348351       kube-apiserver-old-k8s-version-394049
	b840f973e99b9       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   e2609e0873d20       etcd-old-k8s-version-394049
	b80f35939db8a       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   bd5c3d955e689       kube-controller-manager-old-k8s-version-394049
	1a994ea8ba82f       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   0a61a3ff77d27       kube-scheduler-old-k8s-version-394049
	a9c1a0de96f77       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   4729c39131a4e       busybox
	3c0492b681bf1       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   6f133dd257237       coredns-74ff55c5b-fbhfc
	575a6ee419e7f       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   7a21e03aeffba       kindnet-bv2q4
	592dbdd737e87       ba04bb24b9575       8 minutes ago       Exited              storage-provisioner         0                   eceeb9973fc54       storage-provisioner
	afa3d5bad6b52       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   eefaafaaca1a1       kube-proxy-d84wl
	ec54c116a9331       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   538cb8621555a       etcd-old-k8s-version-394049
	cb5a0544025d9       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   5d92486874e6f       kube-scheduler-old-k8s-version-394049
	8ad8c60d925d8       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   4d13c7269a18e       kube-apiserver-old-k8s-version-394049
	30ef2c8817f23       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   23a956e04f534       kube-controller-manager-old-k8s-version-394049
	
	
	==> containerd <==
	Aug 27 23:54:31 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:54:31.426377804Z" level=info msg="CreateContainer within sandbox \"a61bce0b122c1fc2f10125f6d39974289bda8c2a617ba6a1186aef0df26a9eb8\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"54a945e4158d9e87046508da0d6414a56d789ee13f6761696bb16c16c1551cdd\""
	Aug 27 23:54:31 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:54:31.427049404Z" level=info msg="StartContainer for \"54a945e4158d9e87046508da0d6414a56d789ee13f6761696bb16c16c1551cdd\""
	Aug 27 23:54:31 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:54:31.497311849Z" level=info msg="StartContainer for \"54a945e4158d9e87046508da0d6414a56d789ee13f6761696bb16c16c1551cdd\" returns successfully"
	Aug 27 23:54:31 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:54:31.526387926Z" level=info msg="shim disconnected" id=54a945e4158d9e87046508da0d6414a56d789ee13f6761696bb16c16c1551cdd namespace=k8s.io
	Aug 27 23:54:31 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:54:31.526447756Z" level=warning msg="cleaning up after shim disconnected" id=54a945e4158d9e87046508da0d6414a56d789ee13f6761696bb16c16c1551cdd namespace=k8s.io
	Aug 27 23:54:31 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:54:31.526459621Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 27 23:54:32 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:54:32.104431191Z" level=info msg="RemoveContainer for \"0859ea30c0d1fa3bfed03034cf60698088d09096a28c5e7a0ac8bce9e25a8a31\""
	Aug 27 23:54:32 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:54:32.122377048Z" level=info msg="RemoveContainer for \"0859ea30c0d1fa3bfed03034cf60698088d09096a28c5e7a0ac8bce9e25a8a31\" returns successfully"
	Aug 27 23:55:29 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:55:29.404709678Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:55:29 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:55:29.410144964Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 27 23:55:29 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:55:29.411643041Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 27 23:55:29 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:55:29.411749270Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 27 23:56:00 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:00.435966521Z" level=info msg="CreateContainer within sandbox \"a61bce0b122c1fc2f10125f6d39974289bda8c2a617ba6a1186aef0df26a9eb8\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 27 23:56:00 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:00.476038042Z" level=info msg="CreateContainer within sandbox \"a61bce0b122c1fc2f10125f6d39974289bda8c2a617ba6a1186aef0df26a9eb8\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89\""
	Aug 27 23:56:00 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:00.480412092Z" level=info msg="StartContainer for \"8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89\""
	Aug 27 23:56:00 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:00.681272528Z" level=info msg="StartContainer for \"8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89\" returns successfully"
	Aug 27 23:56:00 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:00.711676108Z" level=info msg="shim disconnected" id=8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89 namespace=k8s.io
	Aug 27 23:56:00 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:00.711741616Z" level=warning msg="cleaning up after shim disconnected" id=8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89 namespace=k8s.io
	Aug 27 23:56:00 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:00.711752398Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 27 23:56:01 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:01.418178516Z" level=info msg="RemoveContainer for \"54a945e4158d9e87046508da0d6414a56d789ee13f6761696bb16c16c1551cdd\""
	Aug 27 23:56:01 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:56:01.424010922Z" level=info msg="RemoveContainer for \"54a945e4158d9e87046508da0d6414a56d789ee13f6761696bb16c16c1551cdd\" returns successfully"
	Aug 27 23:58:12 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:58:12.404533149Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:58:12 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:58:12.425657800Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 27 23:58:12 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:58:12.428613945Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 27 23:58:12 old-k8s-version-394049 containerd[570]: time="2024-08-27T23:58:12.428697209Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [3c0492b681bf18809e9a23ab9a173d2d830618a5a4009118054601e45bfe2d62] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:58310 - 32642 "HINFO IN 5855940867599028503.6996764539415629706. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013232459s
	
	
	==> coredns [ead4d00fa7425edec7434788632e2318593bc3569ef3831b4dc8a50390cfcef7] <==
	I0827 23:53:01.962826       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-27 23:52:31.962455326 +0000 UTC m=+0.020703468) (total time: 30.000264591s):
	Trace[2019727887]: [30.000264591s] [30.000264591s] END
	E0827 23:53:01.963104       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0827 23:53:01.962945       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-27 23:52:31.96236723 +0000 UTC m=+0.020615364) (total time: 30.00037096s):
	Trace[1427131847]: [30.00037096s] [30.00037096s] END
	E0827 23:53:01.963131       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0827 23:53:01.963039       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-08-27 23:52:31.962367714 +0000 UTC m=+0.020615848) (total time: 30.000659479s):
	Trace[939984059]: [30.000659479s] [30.000659479s] END
	E0827 23:53:01.963146       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:50554 - 62528 "HINFO IN 5545612607365499106.1380415884376903068. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021393767s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-394049
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-394049
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d0790207a2867fe8d040a9642b972c86ef680cdf
	                    minikube.k8s.io/name=old-k8s-version-394049
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_27T23_49_29_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 27 Aug 2024 23:49:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-394049
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 27 Aug 2024 23:58:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 27 Aug 2024 23:53:20 +0000   Tue, 27 Aug 2024 23:49:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 27 Aug 2024 23:53:20 +0000   Tue, 27 Aug 2024 23:49:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 27 Aug 2024 23:53:20 +0000   Tue, 27 Aug 2024 23:49:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 27 Aug 2024 23:53:20 +0000   Tue, 27 Aug 2024 23:49:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-394049
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022308Ki
	  pods:               110
	System Info:
	  Machine ID:                 57fb3ac7a1e24512ab0f081c7106269b
	  System UUID:                20597d69-b1ba-4523-bfd1-4ceaa4a3aed5
	  Boot ID:                    e72ce5f2-4965-4285-9cc6-e362a4469d8a
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 coredns-74ff55c5b-fbhfc                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m30s
	  kube-system                 etcd-old-k8s-version-394049                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m37s
	  kube-system                 kindnet-bv2q4                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m30s
	  kube-system                 kube-apiserver-old-k8s-version-394049             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-old-k8s-version-394049    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-d84wl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-scheduler-old-k8s-version-394049             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 metrics-server-9975d5f86-hrfcg                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m24s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-72r62         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-27b8v               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m56s (x5 over 8m56s)  kubelet     Node old-k8s-version-394049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m56s (x4 over 8m56s)  kubelet     Node old-k8s-version-394049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m56s (x4 over 8m56s)  kubelet     Node old-k8s-version-394049 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m37s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m37s                  kubelet     Node old-k8s-version-394049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s                  kubelet     Node old-k8s-version-394049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s                  kubelet     Node old-k8s-version-394049 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m30s                  kubelet     Node old-k8s-version-394049 status is now: NodeReady
	  Normal  Starting                 8m29s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m55s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-394049 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet     Node old-k8s-version-394049 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet     Node old-k8s-version-394049 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m42s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [b840f973e99b93adc44783c2e2d337691055b2010c919612e3dadc0ed1482689] <==
	2024-08-27 23:54:10.946515 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:54:20.946343 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:54:30.946602 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:54:40.946472 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:54:50.946300 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:55:00.946398 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:55:10.946490 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:55:20.946486 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:55:30.946491 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:55:40.946522 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:55:50.946271 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:56:00.946737 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:56:10.946647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:56:20.946447 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:56:30.946343 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:56:40.946440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:56:50.946400 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:57:00.946516 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:57:10.946475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:57:20.946279 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:57:30.946350 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:57:40.946406 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:57:50.946476 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:58:00.946727 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:58:10.946903 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [ec54c116a9331e1e0344c99a787d2410df9e7415035a80a4727091fdd518c6d9] <==
	raft2024/08/27 23:49:20 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/08/27 23:49:20 INFO: ea7e25599daad906 became leader at term 2
	raft2024/08/27 23:49:20 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-08-27 23:49:20.221303 I | etcdserver: published {Name:old-k8s-version-394049 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-08-27 23:49:20.221630 I | embed: ready to serve client requests
	2024-08-27 23:49:20.228447 I | embed: serving client requests on 192.168.76.2:2379
	2024-08-27 23:49:20.229161 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-27 23:49:20.238788 I | embed: ready to serve client requests
	2024-08-27 23:49:20.240276 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-27 23:49:20.240699 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-27 23:49:20.269520 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-27 23:49:44.260649 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:49:46.939539 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:49:56.939475 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:50:06.939364 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:50:16.939572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:50:26.939536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:50:36.939385 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:50:46.939461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:50:56.939378 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:51:06.939515 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:51:16.939554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:51:26.939445 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:51:36.942138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-27 23:51:46.939784 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 23:58:14 up  7:40,  0 users,  load average: 1.12, 1.76, 2.32
	Linux old-k8s-version-394049 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [1cbb985b30629df7e58845adea1be58296d1c4b309b10502e97ae37f80e864fd] <==
	I0827 23:56:13.413924       1 main.go:299] handling current node
	I0827 23:56:23.422550       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:56:23.422587       1 main.go:299] handling current node
	I0827 23:56:33.414264       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:56:33.414302       1 main.go:299] handling current node
	I0827 23:56:43.413822       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:56:43.413861       1 main.go:299] handling current node
	I0827 23:56:53.422388       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:56:53.422426       1 main.go:299] handling current node
	I0827 23:57:03.422296       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:57:03.422334       1 main.go:299] handling current node
	I0827 23:57:13.421316       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:57:13.421354       1 main.go:299] handling current node
	I0827 23:57:23.421702       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:57:23.421741       1 main.go:299] handling current node
	I0827 23:57:33.413559       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:57:33.413593       1 main.go:299] handling current node
	I0827 23:57:43.422053       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:57:43.422268       1 main.go:299] handling current node
	I0827 23:57:53.420477       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:57:53.420518       1 main.go:299] handling current node
	I0827 23:58:03.421617       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:58:03.421696       1 main.go:299] handling current node
	I0827 23:58:13.420522       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:58:13.420556       1 main.go:299] handling current node
	
	
	==> kindnet [575a6ee419e7fe10299e33d8b97f8c2598ad91a8fea4bdd2f0dd5e2db16ada9c] <==
	I0827 23:49:48.616042       1 controller.go:374] Syncing nftables rules
	I0827 23:49:58.420931       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:49:58.420997       1 main.go:299] handling current node
	I0827 23:50:08.413762       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:50:08.413794       1 main.go:299] handling current node
	I0827 23:50:18.421243       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:50:18.421279       1 main.go:299] handling current node
	I0827 23:50:28.420557       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:50:28.420593       1 main.go:299] handling current node
	I0827 23:50:38.413777       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:50:38.413815       1 main.go:299] handling current node
	I0827 23:50:48.413560       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:50:48.413626       1 main.go:299] handling current node
	I0827 23:50:58.421307       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:50:58.421346       1 main.go:299] handling current node
	I0827 23:51:08.412728       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:51:08.412816       1 main.go:299] handling current node
	I0827 23:51:18.421429       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:51:18.421463       1 main.go:299] handling current node
	I0827 23:51:28.420902       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:51:28.420935       1 main.go:299] handling current node
	I0827 23:51:38.413184       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:51:38.413227       1 main.go:299] handling current node
	I0827 23:51:48.413310       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0827 23:51:48.413345       1 main.go:299] handling current node
	
	
	==> kube-apiserver [236ee37eeeb99bd1460f867c4e7fe387aa435f0c3062f69ba966a2912dcefd98] <==
	I0827 23:54:54.541332       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:54:54.541340       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0827 23:55:32.642740       1 handler_proxy.go:102] no RequestInfo found in the context
	E0827 23:55:32.642825       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0827 23:55:32.642841       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0827 23:55:33.324134       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:55:33.324189       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:55:33.324199       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0827 23:56:07.143691       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:56:07.143912       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:56:07.144101       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0827 23:56:44.563586       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:56:44.563787       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:56:44.563807       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0827 23:57:28.912981       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:57:28.913025       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:57:28.913034       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0827 23:57:31.165910       1 handler_proxy.go:102] no RequestInfo found in the context
	E0827 23:57:31.166143       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0827 23:57:31.166159       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0827 23:58:12.047007       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:58:12.047056       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:58:12.047065       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [8ad8c60d925d8c127982d6c494b2944705246a4e1f900b216029c075b40579c3] <==
	I0827 23:49:26.708556       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0827 23:49:26.708650       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0827 23:49:27.234048       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0827 23:49:27.287121       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0827 23:49:27.409556       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0827 23:49:27.410673       1 controller.go:606] quota admission added evaluator for: endpoints
	I0827 23:49:27.415419       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0827 23:49:27.743338       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0827 23:49:28.365909       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0827 23:49:28.874379       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0827 23:49:28.938161       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0827 23:49:44.461224       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0827 23:49:44.482699       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0827 23:50:00.535243       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:50:00.535306       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:50:00.535316       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0827 23:50:33.661932       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:50:33.661981       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:50:33.661990       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0827 23:51:05.141825       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:51:05.141902       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:51:05.143276       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0827 23:51:45.881953       1 client.go:360] parsed scheme: "passthrough"
	I0827 23:51:45.882167       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0827 23:51:45.882260       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [30ef2c8817f233bf500df3120c006454ceca974e44a9d5b1ccb0d2f184c7a618] <==
	I0827 23:49:44.493906       1 shared_informer.go:247] Caches are synced for stateful set 
	I0827 23:49:44.507461       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0827 23:49:44.509031       1 shared_informer.go:247] Caches are synced for service account 
	I0827 23:49:44.519339       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0827 23:49:44.519837       1 shared_informer.go:247] Caches are synced for resource quota 
	I0827 23:49:44.526166       1 shared_informer.go:247] Caches are synced for namespace 
	E0827 23:49:44.528012       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0827 23:49:44.549627       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-d84wl"
	I0827 23:49:44.550656       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-fbhfc"
	I0827 23:49:44.561561       1 shared_informer.go:247] Caches are synced for resource quota 
	I0827 23:49:44.570205       1 shared_informer.go:247] Caches are synced for HPA 
	I0827 23:49:44.598080       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-jbl8g"
	I0827 23:49:44.628771       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-bv2q4"
	I0827 23:49:44.713341       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0827 23:49:44.918467       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0827 23:49:44.985638       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0827 23:49:44.985673       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0827 23:49:46.620704       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0827 23:49:46.660901       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-jbl8g"
	I0827 23:49:49.470636       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0827 23:51:49.404839       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0827 23:51:49.436761       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0827 23:51:49.459024       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0827 23:51:49.539072       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0827 23:51:49.608618       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [b80f35939db8aaeb12827ab1e612ae18e95c0c024e797cd5b1ea4629fe4a70ea] <==
	W0827 23:53:53.722526       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0827 23:54:19.758029       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0827 23:54:25.373117       1 request.go:655] Throttling request took 1.048206911s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0827 23:54:26.224755       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0827 23:54:50.259984       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0827 23:54:57.875101       1 request.go:655] Throttling request took 1.048320397s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0827 23:54:58.726692       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0827 23:55:20.762003       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0827 23:55:30.377150       1 request.go:655] Throttling request took 1.047874412s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0827 23:55:31.228541       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0827 23:55:51.263818       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0827 23:56:02.878956       1 request.go:655] Throttling request took 1.048466592s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0827 23:56:03.730498       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0827 23:56:21.765861       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0827 23:56:35.380954       1 request.go:655] Throttling request took 1.048324438s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0827 23:56:36.235037       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0827 23:56:52.267934       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0827 23:57:07.885587       1 request.go:655] Throttling request took 1.048295617s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0827 23:57:08.737114       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0827 23:57:22.770104       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0827 23:57:40.387783       1 request.go:655] Throttling request took 1.04851645s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0827 23:57:41.241391       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0827 23:57:53.272690       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0827 23:58:12.892712       1 request.go:655] Throttling request took 1.042734796s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
	W0827 23:58:13.744482       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-proxy [afa3d5bad6b52464ebc366db825a3bae7e5c7708a260053326c71f3b698cb205] <==
	I0827 23:49:45.609147       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0827 23:49:45.609313       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0827 23:49:45.633739       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0827 23:49:45.633851       1 server_others.go:185] Using iptables Proxier.
	I0827 23:49:45.634320       1 server.go:650] Version: v1.20.0
	I0827 23:49:45.634940       1 config.go:315] Starting service config controller
	I0827 23:49:45.634949       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0827 23:49:45.634965       1 config.go:224] Starting endpoint slice config controller
	I0827 23:49:45.634968       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0827 23:49:45.735090       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0827 23:49:45.735093       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [b794569e1af8dd0a1e24a3b37ce65bee8173206424ae64ce50ae15299bc2ce1e] <==
	I0827 23:52:32.232553       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0827 23:52:32.232693       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0827 23:52:32.249652       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0827 23:52:32.249934       1 server_others.go:185] Using iptables Proxier.
	I0827 23:52:32.250261       1 server.go:650] Version: v1.20.0
	I0827 23:52:32.250705       1 config.go:315] Starting service config controller
	I0827 23:52:32.250837       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0827 23:52:32.250711       1 config.go:224] Starting endpoint slice config controller
	I0827 23:52:32.251300       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0827 23:52:32.351045       1 shared_informer.go:247] Caches are synced for service config 
	I0827 23:52:32.352239       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [1a994ea8ba82f081ccc9e2ac0e483d50f83e2ed42aa614b79c8caa2103abf267] <==
	I0827 23:52:24.411143       1 serving.go:331] Generated self-signed cert in-memory
	W0827 23:52:29.750237       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0827 23:52:29.750272       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 23:52:29.750283       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 23:52:29.750300       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 23:52:30.229458       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0827 23:52:30.230335       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:52:30.230357       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:52:30.230453       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0827 23:52:30.332685       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [cb5a0544025d9eeba2b0613deeb98000ece1fd8d335ccd8307d6631b0c79b808] <==
	W0827 23:49:25.873747       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0827 23:49:25.873829       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0827 23:49:25.873908       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0827 23:49:25.952735       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0827 23:49:25.961401       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:49:25.961420       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0827 23:49:25.961454       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0827 23:49:25.972839       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0827 23:49:25.973313       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0827 23:49:25.973379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0827 23:49:25.973438       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0827 23:49:25.973496       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0827 23:49:25.973564       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0827 23:49:25.987859       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 23:49:25.987960       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0827 23:49:26.007252       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0827 23:49:26.007386       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0827 23:49:26.007488       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0827 23:49:26.007879       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0827 23:49:26.809445       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0827 23:49:26.809644       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0827 23:49:26.901755       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0827 23:49:26.980216       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0827 23:49:27.010358       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0827 23:49:29.461505       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 27 23:56:38 old-k8s-version-394049 kubelet[661]: E0827 23:56:38.403581     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:56:39 old-k8s-version-394049 kubelet[661]: I0827 23:56:39.403254     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89
	Aug 27 23:56:39 old-k8s-version-394049 kubelet[661]: E0827 23:56:39.403644     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	Aug 27 23:56:50 old-k8s-version-394049 kubelet[661]: E0827 23:56:50.403803     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:56:54 old-k8s-version-394049 kubelet[661]: I0827 23:56:54.402756     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89
	Aug 27 23:56:54 old-k8s-version-394049 kubelet[661]: E0827 23:56:54.403100     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	Aug 27 23:57:03 old-k8s-version-394049 kubelet[661]: E0827 23:57:03.403600     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:57:09 old-k8s-version-394049 kubelet[661]: I0827 23:57:09.403136     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89
	Aug 27 23:57:09 old-k8s-version-394049 kubelet[661]: E0827 23:57:09.403474     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	Aug 27 23:57:17 old-k8s-version-394049 kubelet[661]: E0827 23:57:17.403665     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:57:22 old-k8s-version-394049 kubelet[661]: I0827 23:57:22.402831     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89
	Aug 27 23:57:22 old-k8s-version-394049 kubelet[661]: E0827 23:57:22.403246     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	Aug 27 23:57:30 old-k8s-version-394049 kubelet[661]: E0827 23:57:30.403646     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:57:36 old-k8s-version-394049 kubelet[661]: I0827 23:57:36.402817     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89
	Aug 27 23:57:36 old-k8s-version-394049 kubelet[661]: E0827 23:57:36.403163     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	Aug 27 23:57:43 old-k8s-version-394049 kubelet[661]: E0827 23:57:43.403668     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:57:51 old-k8s-version-394049 kubelet[661]: I0827 23:57:51.402823     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89
	Aug 27 23:57:51 old-k8s-version-394049 kubelet[661]: E0827 23:57:51.403157     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	Aug 27 23:57:57 old-k8s-version-394049 kubelet[661]: E0827 23:57:57.404181     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 27 23:58:05 old-k8s-version-394049 kubelet[661]: I0827 23:58:05.402798     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 8db6aabd38edbe1e2f1270112a398c6712fa7e260a2d74fdad6fda93f49d5f89
	Aug 27 23:58:05 old-k8s-version-394049 kubelet[661]: E0827 23:58:05.403164     661 pod_workers.go:191] Error syncing pod 833506d0-947f-42a5-b544-093d7ddb1870 ("dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-72r62_kubernetes-dashboard(833506d0-947f-42a5-b544-093d7ddb1870)"
	Aug 27 23:58:12 old-k8s-version-394049 kubelet[661]: E0827 23:58:12.429038     661 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 27 23:58:12 old-k8s-version-394049 kubelet[661]: E0827 23:58:12.429100     661 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 27 23:58:12 old-k8s-version-394049 kubelet[661]: E0827 23:58:12.429302     661 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-nrslw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 27 23:58:12 old-k8s-version-394049 kubelet[661]: E0827 23:58:12.429378     661 pod_workers.go:191] Error syncing pod d9d77d6c-c425-42a2-9ece-0b66a9f7a842 ("metrics-server-9975d5f86-hrfcg_kube-system(d9d77d6c-c425-42a2-9ece-0b66a9f7a842)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [d6670465175a28a741e1eadfb9ec891d36c454066af259c2ba1292e1c2d606d9] <==
	2024/08/27 23:52:54 Starting overwatch
	2024/08/27 23:52:54 Using namespace: kubernetes-dashboard
	2024/08/27 23:52:54 Using in-cluster config to connect to apiserver
	2024/08/27 23:52:54 Using secret token for csrf signing
	2024/08/27 23:52:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/27 23:52:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/27 23:52:54 Successful initial request to the apiserver, version: v1.20.0
	2024/08/27 23:52:54 Generating JWE encryption key
	2024/08/27 23:52:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/27 23:52:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/27 23:52:54 Initializing JWE encryption key from synchronized object
	2024/08/27 23:52:54 Creating in-cluster Sidecar client
	2024/08/27 23:52:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:52:54 Serving insecurely on HTTP port: 9090
	2024/08/27 23:53:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:53:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:54:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:54:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:55:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:55:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:56:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:56:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:57:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/27 23:57:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [42597d6ccc6c90213fb2f50464c1373d136df2cc9496367789b03fba8d5f25bf] <==
	I0827 23:52:32.511409       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0827 23:52:32.524032       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0827 23:52:32.524833       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0827 23:52:49.962793       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0827 23:52:49.963412       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6b2a3b3-516a-4856-9d3f-d4c8c78b942c", APIVersion:"v1", ResourceVersion:"796", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-394049_8ce7e426-fe00-4a35-9664-c97b3fd81e90 became leader
	I0827 23:52:49.967723       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-394049_8ce7e426-fe00-4a35-9664-c97b3fd81e90!
	I0827 23:52:50.069084       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-394049_8ce7e426-fe00-4a35-9664-c97b3fd81e90!
	
	
	==> storage-provisioner [592dbdd737e878b0fe0ea4cea6b72f6e640f9c434b17d5af3d98a6c70210e42c] <==
	I0827 23:49:47.447539       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0827 23:49:47.473188       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0827 23:49:47.473259       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0827 23:49:47.487311       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0827 23:49:47.487835       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-394049_ab469b8b-8320-4159-9e92-4cbfd295dcf4!
	I0827 23:49:47.492450       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6b2a3b3-516a-4856-9d3f-d4c8c78b942c", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-394049_ab469b8b-8320-4159-9e92-4cbfd295dcf4 became leader
	I0827 23:49:47.588440       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-394049_ab469b8b-8320-4159-9e92-4cbfd295dcf4!
	

                                                
                                                
-- /stdout --
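The kubelet and dashboard logs above tell a consistent story: the metrics-server pod never becomes ready because its image points at fake.domain, a registry that DNS cannot resolve, so every pull ends in ErrImagePull/ImagePullBackOff, while the dashboard's metric client keeps retrying against the dashboard-metrics-scraper service in the meantime. As a rough follow-up (not part of the recorded run, and assuming the old-k8s-version-394049 profile is still up and the deployment is named metrics-server), the image reference and the scraper service could be checked with:

    # show which image the metrics-server deployment points at
    kubectl --context old-k8s-version-394049 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
    # confirm the dashboard-metrics-scraper service the dashboard is polling
    kubectl --context old-k8s-version-394049 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper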
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-394049 -n old-k8s-version-394049
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-394049 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-hrfcg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-394049 describe pod metrics-server-9975d5f86-hrfcg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-394049 describe pod metrics-server-9975d5f86-hrfcg: exit status 1 (109.045433ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-hrfcg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-394049 describe pod metrics-server-9975d5f86-hrfcg: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.10s)
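To reproduce just this failure locally, the subtest can normally be selected with go test's -run filter from a minikube source checkout that already has out/minikube-linux-arm64 built. The CI wrapper and its extra flags (driver, start args, binary path) are not shown in this report, so the invocation below is only a minimal sketch:

    # re-run only the old-k8s-version serial flow; the timeout value is an assumption
    go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial' -timeout 90m -v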

                                                
                                    

Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 16.1
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 6.52
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.17
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 216.13
31 TestAddons/serial/GCPAuth/Namespaces 0.19
33 TestAddons/parallel/Registry 16.98
34 TestAddons/parallel/Ingress 20.65
35 TestAddons/parallel/InspektorGadget 12.31
36 TestAddons/parallel/MetricsServer 6.08
39 TestAddons/parallel/CSI 54.78
40 TestAddons/parallel/Headlamp 17.48
41 TestAddons/parallel/CloudSpanner 5.62
42 TestAddons/parallel/LocalPath 8.82
43 TestAddons/parallel/NvidiaDevicePlugin 5.62
44 TestAddons/parallel/Yakd 11.96
45 TestAddons/StoppedEnableDisable 12.31
46 TestCertOptions 37.18
47 TestCertExpiration 232.69
49 TestForceSystemdFlag 44.62
50 TestForceSystemdEnv 41.4
51 TestDockerEnvContainerd 50.1
56 TestErrorSpam/setup 31.95
57 TestErrorSpam/start 0.74
58 TestErrorSpam/status 1.17
59 TestErrorSpam/pause 1.79
60 TestErrorSpam/unpause 1.83
61 TestErrorSpam/stop 1.49
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 50.67
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.12
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.12
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.37
73 TestFunctional/serial/CacheCmd/cache/add_local 1.25
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.13
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.19
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 45.91
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.74
84 TestFunctional/serial/LogsFileCmd 1.73
85 TestFunctional/serial/InvalidService 4.89
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 13.26
89 TestFunctional/parallel/DryRun 0.57
90 TestFunctional/parallel/InternationalLanguage 0.28
91 TestFunctional/parallel/StatusCmd 1.11
95 TestFunctional/parallel/ServiceCmdConnect 10.71
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 42.68
99 TestFunctional/parallel/SSHCmd 0.82
100 TestFunctional/parallel/CpCmd 1.7
102 TestFunctional/parallel/FileSync 0.28
103 TestFunctional/parallel/CertSync 2.1
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.73
111 TestFunctional/parallel/License 0.31
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.16
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.25
124 TestFunctional/parallel/ServiceCmd/List 0.59
125 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
126 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
127 TestFunctional/parallel/ServiceCmd/Format 0.38
128 TestFunctional/parallel/ServiceCmd/URL 0.39
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
130 TestFunctional/parallel/ProfileCmd/profile_list 0.4
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
132 TestFunctional/parallel/MountCmd/any-port 6.92
133 TestFunctional/parallel/MountCmd/specific-port 1.2
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.9
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 1.2
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.37
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
141 TestFunctional/parallel/ImageCommands/ImageBuild 4.13
142 TestFunctional/parallel/ImageCommands/Setup 0.79
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.5
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.63
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.81
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
150 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
151 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
152 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 118.85
160 TestMultiControlPlane/serial/DeployApp 29.13
161 TestMultiControlPlane/serial/PingHostFromPods 1.64
162 TestMultiControlPlane/serial/AddWorkerNode 25.18
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.79
165 TestMultiControlPlane/serial/CopyFile 19.94
166 TestMultiControlPlane/serial/StopSecondaryNode 12.92
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.64
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.74
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 147.77
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.63
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
173 TestMultiControlPlane/serial/StopCluster 36.1
174 TestMultiControlPlane/serial/RestartCluster 64.21
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
176 TestMultiControlPlane/serial/AddSecondaryNode 41.96
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
181 TestJSONOutput/start/Command 52.23
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.77
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.7
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.76
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 38.2
207 TestKicCustomNetwork/use_default_bridge_network 36.29
208 TestKicExistingNetwork 33.59
209 TestKicCustomSubnet 35.41
210 TestKicStaticIP 35.96
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 72.4
215 TestMountStart/serial/StartWithMountFirst 9.64
216 TestMountStart/serial/VerifyMountFirst 0.3
217 TestMountStart/serial/StartWithMountSecond 6.53
218 TestMountStart/serial/VerifyMountSecond 0.28
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 7.96
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 69.53
227 TestMultiNode/serial/DeployApp2Nodes 17.59
228 TestMultiNode/serial/PingHostFrom2Pods 1.03
229 TestMultiNode/serial/AddNode 17.02
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.33
232 TestMultiNode/serial/CopyFile 10.42
233 TestMultiNode/serial/StopNode 2.31
234 TestMultiNode/serial/StartAfterStop 9.63
235 TestMultiNode/serial/RestartKeepsNodes 95.98
236 TestMultiNode/serial/DeleteNode 5.61
237 TestMultiNode/serial/StopMultiNode 24.05
238 TestMultiNode/serial/RestartMultiNode 48.59
239 TestMultiNode/serial/ValidateNameConflict 34.09
244 TestPreload 116.23
246 TestScheduledStopUnix 108.84
249 TestInsufficientStorage 10.42
250 TestRunningBinaryUpgrade 82.04
252 TestKubernetesUpgrade 352.35
253 TestMissingContainerUpgrade 180.39
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 40.08
257 TestNoKubernetes/serial/StartWithStopK8s 19.01
258 TestNoKubernetes/serial/Start 9.07
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
260 TestNoKubernetes/serial/ProfileList 0.88
261 TestNoKubernetes/serial/Stop 1.24
262 TestNoKubernetes/serial/StartNoArgs 6.96
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
264 TestStoppedBinaryUpgrade/Setup 0.84
265 TestStoppedBinaryUpgrade/Upgrade 83.88
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.08
275 TestPause/serial/Start 51.76
276 TestPause/serial/SecondStartNoReconfiguration 6.68
277 TestPause/serial/Pause 0.96
278 TestPause/serial/VerifyStatus 0.43
279 TestPause/serial/Unpause 0.78
280 TestPause/serial/PauseAgain 1.15
281 TestPause/serial/DeletePaused 3.24
282 TestPause/serial/VerifyDeletedResources 3.07
290 TestNetworkPlugins/group/false 4.61
295 TestStartStop/group/old-k8s-version/serial/FirstStart 175.77
297 TestStartStop/group/no-preload/serial/FirstStart 79.05
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.85
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.34
300 TestStartStop/group/old-k8s-version/serial/Stop 12.59
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.25
303 TestStartStop/group/no-preload/serial/DeployApp 10.42
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
305 TestStartStop/group/no-preload/serial/Stop 12.12
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 279.17
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/no-preload/serial/Pause 3.26
313 TestStartStop/group/embed-certs/serial/FirstStart 67.76
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.06
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
317 TestStartStop/group/old-k8s-version/serial/Pause 3.91
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 66.93
320 TestStartStop/group/embed-certs/serial/DeployApp 9.33
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
322 TestStartStop/group/embed-certs/serial/Stop 12.1
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
325 TestStartStop/group/embed-certs/serial/SecondStart 297.2
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.82
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.91
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.33
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 268.28
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
334 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.12
335 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.14
337 TestStartStop/group/newest-cni/serial/FirstStart 42.05
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
339 TestStartStop/group/embed-certs/serial/Pause 3.78
340 TestNetworkPlugins/group/auto/Start 55.06
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.88
343 TestStartStop/group/newest-cni/serial/Stop 1.33
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
345 TestStartStop/group/newest-cni/serial/SecondStart 16.75
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
349 TestStartStop/group/newest-cni/serial/Pause 3.51
350 TestNetworkPlugins/group/auto/KubeletFlags 0.39
351 TestNetworkPlugins/group/auto/NetCatPod 11.35
352 TestNetworkPlugins/group/kindnet/Start 56.71
353 TestNetworkPlugins/group/auto/DNS 0.24
354 TestNetworkPlugins/group/auto/Localhost 0.24
355 TestNetworkPlugins/group/auto/HairPin 0.22
356 TestNetworkPlugins/group/calico/Start 70.8
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
359 TestNetworkPlugins/group/kindnet/NetCatPod 10.33
360 TestNetworkPlugins/group/kindnet/DNS 0.22
361 TestNetworkPlugins/group/kindnet/Localhost 0.2
362 TestNetworkPlugins/group/kindnet/HairPin 0.19
363 TestNetworkPlugins/group/custom-flannel/Start 53.85
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.37
366 TestNetworkPlugins/group/calico/NetCatPod 10.35
367 TestNetworkPlugins/group/calico/DNS 0.26
368 TestNetworkPlugins/group/calico/Localhost 0.22
369 TestNetworkPlugins/group/calico/HairPin 0.24
370 TestNetworkPlugins/group/enable-default-cni/Start 76.3
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.53
373 TestNetworkPlugins/group/custom-flannel/DNS 0.21
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
376 TestNetworkPlugins/group/flannel/Start 51.31
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.44
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
384 TestNetworkPlugins/group/flannel/NetCatPod 11.42
385 TestNetworkPlugins/group/bridge/Start 77.04
386 TestNetworkPlugins/group/flannel/DNS 0.26
387 TestNetworkPlugins/group/flannel/Localhost 0.2
388 TestNetworkPlugins/group/flannel/HairPin 0.19
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
390 TestNetworkPlugins/group/bridge/NetCatPod 10.27
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.14
393 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.20.0/json-events (16.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-040557 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-040557 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (16.103234417s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.10s)
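With --download-only, the start command fetches artifacts without creating a cluster: the LogsDuration output below shows the kic base image being cached, the v1.20.0 preload tarball being downloaded and checksummed, and the kubectl binary being fetched. Assuming a default MINIKUBE_HOME (this CI run uses a Jenkins-specific one), the cached artifacts can be listed with:

    # preloaded image tarballs, one per Kubernetes version and runtime
    ls ~/.minikube/cache/preloaded-tarball/
    # per-version binaries such as the kubectl download seen in the logs
    ls ~/.minikube/cache/linux/arm64/v1.20.0/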

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-040557
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-040557: exit status 85 (65.349096ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-040557 | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |          |
	|         | -p download-only-040557        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 23:01:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 23:01:31.686749 1739720 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:01:31.686916 1739720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:01:31.686925 1739720 out.go:358] Setting ErrFile to fd 2...
	I0827 23:01:31.686930 1739720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:01:31.687187 1739720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	W0827 23:01:31.687343 1739720 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19522-1734325/.minikube/config/config.json: open /home/jenkins/minikube-integration/19522-1734325/.minikube/config/config.json: no such file or directory
	I0827 23:01:31.687737 1739720 out.go:352] Setting JSON to true
	I0827 23:01:31.689889 1739720 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24241,"bootTime":1724775451,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0827 23:01:31.689972 1739720 start.go:139] virtualization:  
	I0827 23:01:31.692593 1739720 out.go:97] [download-only-040557] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0827 23:01:31.692751 1739720 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball: no such file or directory
	I0827 23:01:31.692858 1739720 notify.go:220] Checking for updates...
	I0827 23:01:31.694914 1739720 out.go:169] MINIKUBE_LOCATION=19522
	I0827 23:01:31.696572 1739720 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:01:31.698331 1739720 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:01:31.700413 1739720 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	I0827 23:01:31.702035 1739720 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0827 23:01:31.705307 1739720 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 23:01:31.705564 1739720 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:01:31.732499 1739720 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 23:01:31.732595 1739720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:01:31.792586 1739720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-27 23:01:31.782980727 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:01:31.792703 1739720 docker.go:307] overlay module found
	I0827 23:01:31.794662 1739720 out.go:97] Using the docker driver based on user configuration
	I0827 23:01:31.794689 1739720 start.go:297] selected driver: docker
	I0827 23:01:31.794702 1739720 start.go:901] validating driver "docker" against <nil>
	I0827 23:01:31.794808 1739720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:01:31.849266 1739720 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-27 23:01:31.839033013 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:01:31.849441 1739720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 23:01:31.849731 1739720 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0827 23:01:31.849896 1739720 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 23:01:31.852033 1739720 out.go:169] Using Docker driver with root privileges
	I0827 23:01:31.854042 1739720 cni.go:84] Creating CNI manager for ""
	I0827 23:01:31.854061 1739720 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:01:31.854072 1739720 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0827 23:01:31.854158 1739720 start.go:340] cluster config:
	{Name:download-only-040557 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-040557 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:01:31.856151 1739720 out.go:97] Starting "download-only-040557" primary control-plane node in "download-only-040557" cluster
	I0827 23:01:31.856196 1739720 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0827 23:01:31.858192 1739720 out.go:97] Pulling base image v0.0.44-1724667927-19511 ...
	I0827 23:01:31.858216 1739720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0827 23:01:31.858404 1739720 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local docker daemon
	I0827 23:01:31.873453 1739720 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 23:01:31.873636 1739720 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory
	I0827 23:01:31.873740 1739720 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 23:01:31.920014 1739720 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0827 23:01:31.920044 1739720 cache.go:56] Caching tarball of preloaded images
	I0827 23:01:31.920200 1739720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0827 23:01:31.922716 1739720 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0827 23:01:31.922738 1739720 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0827 23:01:32.021114 1739720 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0827 23:01:38.129692 1739720 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0827 23:01:38.129783 1739720 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0827 23:01:39.358389 1739720 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0827 23:01:39.358788 1739720 profile.go:143] Saving config to /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/download-only-040557/config.json ...
	I0827 23:01:39.358825 1739720 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/download-only-040557/config.json: {Name:mk37156d8957c3b485c5edd8256cef9bf11ed439 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0827 23:01:39.359022 1739720 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0827 23:01:39.360072 1739720 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-040557 host does not exist
	  To start a cluster, run: "minikube start -p download-only-040557"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
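The Last Start log above also shows how the preload download is integrity-checked: the tarball URL carries a checksum=md5:... parameter that minikube verifies before caching the file. The same check can be approximated by hand; the URL and md5 value are copied from the log, while the manual comparison itself is only a sketch:

    # fetch the v1.20.0 containerd preload and compare it against the md5 recorded in the log
    curl -fLo preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 \
      https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
    echo "7e3d48ccb9f143791669d02e14ce1643  preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -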

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-040557
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.0/json-events (6.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-783356 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-783356 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.516645605s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.52s)

                                                
                                    
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-783356
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-783356: exit status 85 (66.990464ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-040557 | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |                     |
	|         | -p download-only-040557        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| delete  | -p download-only-040557        | download-only-040557 | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC | 27 Aug 24 23:01 UTC |
	| start   | -o=json --download-only        | download-only-783356 | jenkins | v1.33.1 | 27 Aug 24 23:01 UTC |                     |
	|         | -p download-only-783356        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/27 23:01:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0827 23:01:48.213114 1739926 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:01:48.213266 1739926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:01:48.213277 1739926 out.go:358] Setting ErrFile to fd 2...
	I0827 23:01:48.213283 1739926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:01:48.213576 1739926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:01:48.214056 1739926 out.go:352] Setting JSON to true
	I0827 23:01:48.214984 1739926 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":24258,"bootTime":1724775451,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0827 23:01:48.215065 1739926 start.go:139] virtualization:  
	I0827 23:01:48.217640 1739926 out.go:97] [download-only-783356] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 23:01:48.217864 1739926 notify.go:220] Checking for updates...
	I0827 23:01:48.219877 1739926 out.go:169] MINIKUBE_LOCATION=19522
	I0827 23:01:48.221831 1739926 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:01:48.223516 1739926 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:01:48.225208 1739926 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	I0827 23:01:48.227295 1739926 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0827 23:01:48.230707 1739926 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0827 23:01:48.231020 1739926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:01:48.259923 1739926 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 23:01:48.260016 1739926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:01:48.317546 1739926 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-27 23:01:48.307679181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:01:48.317658 1739926 docker.go:307] overlay module found
	I0827 23:01:48.319866 1739926 out.go:97] Using the docker driver based on user configuration
	I0827 23:01:48.319899 1739926 start.go:297] selected driver: docker
	I0827 23:01:48.319921 1739926 start.go:901] validating driver "docker" against <nil>
	I0827 23:01:48.320036 1739926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:01:48.373645 1739926 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-27 23:01:48.364506974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:01:48.373812 1739926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0827 23:01:48.374095 1739926 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0827 23:01:48.374244 1739926 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0827 23:01:48.376272 1739926 out.go:169] Using Docker driver with root privileges
	I0827 23:01:48.378258 1739926 cni.go:84] Creating CNI manager for ""
	I0827 23:01:48.378277 1739926 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0827 23:01:48.378288 1739926 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0827 23:01:48.378364 1739926 start.go:340] cluster config:
	{Name:download-only-783356 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-783356 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:01:48.380071 1739926 out.go:97] Starting "download-only-783356" primary control-plane node in "download-only-783356" cluster
	I0827 23:01:48.380088 1739926 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0827 23:01:48.381733 1739926 out.go:97] Pulling base image v0.0.44-1724667927-19511 ...
	I0827 23:01:48.381757 1739926 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0827 23:01:48.381912 1739926 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local docker daemon
	I0827 23:01:48.397461 1739926 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 to local cache
	I0827 23:01:48.397581 1739926 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory
	I0827 23:01:48.397605 1739926 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 in local cache directory, skipping pull
	I0827 23:01:48.397611 1739926 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 exists in cache, skipping pull
	I0827 23:01:48.397621 1739926 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 as a tarball
	I0827 23:01:48.457498 1739926 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0827 23:01:48.457531 1739926 cache.go:56] Caching tarball of preloaded images
	I0827 23:01:48.457692 1739926 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0827 23:01:48.459665 1739926 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0827 23:01:48.459685 1739926 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0827 23:01:48.546338 1739926 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0827 23:01:52.879195 1739926 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0827 23:01:52.879304 1739926 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19522-1734325/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-783356 host does not exist
	  To start a cluster, run: "minikube start -p download-only-783356"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-783356
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.17s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-264953 --alsologtostderr --binary-mirror http://127.0.0.1:43903 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-264953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-264953
--- PASS: TestBinaryMirror (0.58s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-726754
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-726754: exit status 85 (66.355212ms)

-- stdout --
	* Profile "addons-726754" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-726754"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-726754
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-726754: exit status 85 (66.168648ms)

-- stdout --
	* Profile "addons-726754" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-726754"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (216.13s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-726754 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-726754 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m36.126126142s)
--- PASS: TestAddons/Setup (216.13s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-726754 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-726754 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Registry (16.98s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.688245ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-97j86" [007aa8f4-d9b8-4f63-a5b1-8327bb01249d] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004660963s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dk9mv" [2e38d268-dcf1-4880-8d79-499746cf6bfe] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00376111s
addons_test.go:342: (dbg) Run:  kubectl --context addons-726754 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-726754 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-726754 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.940333904s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.98s)
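The in-cluster probe used by this test is also useful on its own for checking that the registry addon is reachable from pods. A minimal sketch based on the commands above (addons-726754 is this run's profile name; substitute your own):

  # run a throwaway busybox pod that resolves and probes the registry service DNS name
  kubectl --context addons-726754 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # the node address serving the registry can then be checked with: minikube -p addons-726754 ip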

                                                
                                    
TestAddons/parallel/Ingress (20.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-726754 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-726754 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-726754 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9076406f-51a5-4741-857a-07831c1cb62d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9076406f-51a5-4741-857a-07831c1cb62d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003742026s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-726754 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-726754 addons disable ingress-dns --alsologtostderr -v=1: (1.858273897s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-726754 addons disable ingress --alsologtostderr -v=1: (7.937749367s)
--- PASS: TestAddons/parallel/Ingress (20.65s)

TestAddons/parallel/InspektorGadget (12.31s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b9kcp" [7f17c02a-303b-4b68-bc92-c8cd0ea2c17d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005466788s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-726754
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-726754: (6.304694556s)
--- PASS: TestAddons/parallel/InspektorGadget (12.31s)

TestAddons/parallel/MetricsServer (6.08s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.822906ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-9zq2h" [68f66677-d6fe-47b3-bcbe-1d75b9c9a49f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00552237s
addons_test.go:417: (dbg) Run:  kubectl --context addons-726754 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.08s)

TestAddons/parallel/CSI (54.78s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.887034ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-726754 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/08/27 23:09:28 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-726754 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c540e56e-b0ee-4064-8ba5-e05c938ed2f6] Pending
helpers_test.go:344: "task-pv-pod" [c540e56e-b0ee-4064-8ba5-e05c938ed2f6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c540e56e-b0ee-4064-8ba5-e05c938ed2f6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004436762s
addons_test.go:590: (dbg) Run:  kubectl --context addons-726754 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-726754 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-726754 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-726754 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-726754 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-726754 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-726754 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [d483deb6-d39f-48cd-8bc3-6201820c908f] Pending
helpers_test.go:344: "task-pv-pod-restore" [d483deb6-d39f-48cd-8bc3-6201820c908f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [d483deb6-d39f-48cd-8bc3-6201820c908f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003902222s
addons_test.go:632: (dbg) Run:  kubectl --context addons-726754 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-726754 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-726754 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-726754 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.898162762s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.78s)

TestAddons/parallel/Headlamp (17.48s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-726754 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-726754 --alsologtostderr -v=1: (1.110786459s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-9pp7m" [feff0d5e-caf6-4eda-bb8d-216fcda1667c] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-9pp7m" [feff0d5e-caf6-4eda-bb8d-216fcda1667c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-9pp7m" [feff0d5e-caf6-4eda-bb8d-216fcda1667c] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004292434s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-726754 addons disable headlamp --alsologtostderr -v=1: (6.365807242s)
--- PASS: TestAddons/parallel/Headlamp (17.48s)

TestAddons/parallel/CloudSpanner (5.62s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-vspvs" [3a80ec68-bfdc-453d-9fb8-05e5f4b90aa2] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.005488094s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-726754
--- PASS: TestAddons/parallel/CloudSpanner (5.62s)

TestAddons/parallel/LocalPath (8.82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-726754 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-726754 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726754 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [782720bb-f899-449d-be73-260c6e265150] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [782720bb-f899-449d-be73-260c6e265150] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [782720bb-f899-449d-be73-260c6e265150] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.0035518s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-726754 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 ssh "cat /opt/local-path-provisioner/pvc-4656418d-3451-4f42-aaea-b26b42c34011_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-726754 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-726754 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.82s)

TestAddons/parallel/NvidiaDevicePlugin (5.62s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-j96qf" [470c2134-9d9e-48ff-89d0-bf973f12a637] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004650809s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-726754
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.62s)

TestAddons/parallel/Yakd (11.96s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-x8759" [3d42e7c6-9884-4cb9-af38-9176c74eb9f2] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003984106s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-726754 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-726754 addons disable yakd --alsologtostderr -v=1: (5.956513271s)
--- PASS: TestAddons/parallel/Yakd (11.96s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-726754
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-726754: (12.039412879s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-726754
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-726754
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-726754
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

TestCertOptions (37.18s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-806650 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-806650 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.429265894s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-806650 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-806650 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-806650 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-806650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-806650
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-806650: (2.054691697s)
--- PASS: TestCertOptions (37.18s)
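The certificate check performed here can be reproduced by hand. A minimal sketch using this run's flags (the profile name, extra IPs/names, and port 8555 are just the values this test happened to use; substitute your own):

  minikube start -p cert-options-806650 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --container-runtime=containerd
  # the extra IPs and names should appear in the apiserver certificate's Subject Alternative Name extension
  minikube -p cert-options-806650 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # the generated kubeconfig should point at the non-default apiserver port 8555
  kubectl --context cert-options-806650 config view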

                                                
                                    
TestCertExpiration (232.69s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-303453 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-303453 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.897085574s)
E0827 23:48:35.818735 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-303453 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-303453 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.483629587s)
helpers_test.go:175: Cleaning up "cert-expiration-303453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-303453
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-303453: (2.312722528s)
--- PASS: TestCertExpiration (232.69s)

TestForceSystemdFlag (44.62s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-036455 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-036455 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.894478217s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-036455 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-036455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-036455
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-036455: (2.346725388s)
--- PASS: TestForceSystemdFlag (44.62s)

TestForceSystemdEnv (41.4s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-908909 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-908909 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.692089098s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-908909 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-908909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-908909
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-908909: (2.282116425s)
--- PASS: TestForceSystemdEnv (41.40s)

TestDockerEnvContainerd (50.1s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-041545 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-041545 --driver=docker  --container-runtime=containerd: (34.265290214s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-041545"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-041545": (1.026734197s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vgr8HL2dcQsP/agent.1758312" SSH_AGENT_PID="1758313" DOCKER_HOST=ssh://docker@127.0.0.1:33539 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vgr8HL2dcQsP/agent.1758312" SSH_AGENT_PID="1758313" DOCKER_HOST=ssh://docker@127.0.0.1:33539 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vgr8HL2dcQsP/agent.1758312" SSH_AGENT_PID="1758313" DOCKER_HOST=ssh://docker@127.0.0.1:33539 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.279986433s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vgr8HL2dcQsP/agent.1758312" SSH_AGENT_PID="1758313" DOCKER_HOST=ssh://docker@127.0.0.1:33539 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Vgr8HL2dcQsP/agent.1758312" SSH_AGENT_PID="1758313" DOCKER_HOST=ssh://docker@127.0.0.1:33539 docker image ls": (1.042168382s)
helpers_test.go:175: Cleaning up "dockerenv-041545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-041545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-041545: (1.968607434s)
--- PASS: TestDockerEnvContainerd (50.10s)
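For reference, the docker-env-over-SSH workflow this test exercises can be replayed manually. A minimal sketch based on the commands above (the profile name dockerenv-041545, the SSH port, and the agent socket are specific to this run, and testdata/docker-env is a directory in the minikube source tree):

  # start a containerd-backed cluster with the docker driver
  minikube start -p dockerenv-041545 --driver=docker --container-runtime=containerd
  # point the local docker CLI at the cluster over SSH and load the key into an ssh-agent
  eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-041545)"
  # subsequent docker commands now run against the daemon inside the cluster
  docker version
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls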

                                                
                                    
TestErrorSpam/setup (31.95s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-336734 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-336734 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-336734 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-336734 --driver=docker  --container-runtime=containerd: (31.954518525s)
--- PASS: TestErrorSpam/setup (31.95s)

TestErrorSpam/start (0.74s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 start --dry-run
--- PASS: TestErrorSpam/start (0.74s)

TestErrorSpam/status (1.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 status
--- PASS: TestErrorSpam/status (1.17s)

TestErrorSpam/pause (1.79s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.83s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

TestErrorSpam/stop (1.49s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 stop: (1.282454973s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-336734 --log_dir /tmp/nospam-336734 stop
--- PASS: TestErrorSpam/stop (1.49s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19522-1734325/.minikube/files/etc/test/nested/copy/1739715/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (50.67s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-572102 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-572102 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (50.664467896s)
--- PASS: TestFunctional/serial/StartWithProxy (50.67s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.12s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-572102 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-572102 --alsologtostderr -v=8: (6.109331435s)
functional_test.go:663: soft start took 6.116217999s for "functional-572102" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.12s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-572102 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.37s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 cache add registry.k8s.io/pause:3.1: (1.615801664s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 cache add registry.k8s.io/pause:3.3: (1.416404175s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 cache add registry.k8s.io/pause:latest: (1.341990532s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.37s)

TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-572102 /tmp/TestFunctionalserialCacheCmdcacheadd_local4121403633/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cache add minikube-local-cache-test:functional-572102
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cache delete minikube-local-cache-test:functional-572102
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-572102
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (314.206718ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 cache reload: (1.133968716s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)
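The cache_reload sequence above amounts to the following commands, which can be replayed against any profile (a sketch; functional-572102 is this run's profile name):

  # delete a cached image from inside the node
  minikube -p functional-572102 ssh sudo crictl rmi registry.k8s.io/pause:latest
  # confirm it is gone (expected to fail with "no such image")
  minikube -p functional-572102 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # push every image in minikube's local cache back into the node
  minikube -p functional-572102 cache reload
  # the image is available again
  minikube -p functional-572102 ssh sudo crictl inspecti registry.k8s.io/pause:latest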

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 kubectl -- --context functional-572102 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.19s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-572102 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (45.91s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-572102 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-572102 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.910441989s)
functional_test.go:761: restart took 45.910558006s for "functional-572102" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (45.91s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-572102 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
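ComponentHealth asserts that each control-plane pod is in phase Running with a Ready condition, which is what the paired phase/status lines above record. A minimal sketch of the same check using the standard Pod JSON fields and the kubectl query from the log; not the test's code.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same query the test runs: control-plane pods in kube-system, as JSON.
	out, err := exec.Command("kubectl", "--context", "functional-572102",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var list struct {
		Items []struct {
			Metadata struct {
				Name string
			}
			Status struct {
				Phase      string
				Conditions []struct {
					Type, Status string
				}
			}
		}
	}
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, p := range list.Items {
		ready := "False"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}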

                                                
                                    
TestFunctional/serial/LogsCmd (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 logs: (1.7406623s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 logs --file /tmp/TestFunctionalserialLogsFileCmd724169098/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 logs --file /tmp/TestFunctionalserialLogsFileCmd724169098/001/logs.txt: (1.732422807s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

                                                
                                    
TestFunctional/serial/InvalidService (4.89s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-572102 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-572102
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-572102: exit status 115 (529.360988ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30844 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-572102 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-572102 delete -f testdata/invalidsvc.yaml: (1.095583726s)
--- PASS: TestFunctional/serial/InvalidService (4.89s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 config get cpus: exit status 14 (66.240104ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 config get cpus: exit status 14 (64.256437ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
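The two non-zero exits above are the point of ConfigCmd: `config get` on a key that is not set exits with code 14, while get after set succeeds. A small sketch of that contract, assuming the binary and profile from this log.

package main

import (
	"fmt"
	"os/exec"
)

// exitCode runs the minikube binary from this log and returns its exit status.
func exitCode(args ...string) int {
	if err := exec.Command("out/minikube-linux-arm64", args...).Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	p := []string{"-p", "functional-572102", "config"}
	fmt.Println(exitCode(append(p, "unset", "cpus")...))    // 0: unset is idempotent
	fmt.Println(exitCode(append(p, "get", "cpus")...))      // 14: key not in config
	fmt.Println(exitCode(append(p, "set", "cpus", "2")...)) // 0
	fmt.Println(exitCode(append(p, "get", "cpus")...))      // 0: prints "2"
	fmt.Println(exitCode(append(p, "unset", "cpus")...))    // 0
	fmt.Println(exitCode(append(p, "get", "cpus")...))      // 14 again
}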

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-572102 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-572102 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1773940: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.26s)

                                                
                                    
TestFunctional/parallel/DryRun (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-572102 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-572102 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (259.220948ms)

                                                
                                                
-- stdout --
	* [functional-572102] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:15:15.331712 1773524 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:15:15.331986 1773524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:15:15.331994 1773524 out.go:358] Setting ErrFile to fd 2...
	I0827 23:15:15.332012 1773524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:15:15.332603 1773524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:15:15.333177 1773524 out.go:352] Setting JSON to false
	I0827 23:15:15.334558 1773524 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":25065,"bootTime":1724775451,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0827 23:15:15.334712 1773524 start.go:139] virtualization:  
	I0827 23:15:15.337214 1773524 out.go:177] * [functional-572102] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 23:15:15.339431 1773524 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:15:15.339544 1773524 notify.go:220] Checking for updates...
	I0827 23:15:15.343113 1773524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:15:15.344834 1773524 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:15:15.346353 1773524 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	I0827 23:15:15.347964 1773524 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 23:15:15.349691 1773524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:15:15.352055 1773524 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:15:15.353156 1773524 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:15:15.401990 1773524 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 23:15:15.402182 1773524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:15:15.492679 1773524 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-27 23:15:15.480328876 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:15:15.492793 1773524 docker.go:307] overlay module found
	I0827 23:15:15.494726 1773524 out.go:177] * Using the docker driver based on existing profile
	I0827 23:15:15.496555 1773524 start.go:297] selected driver: docker
	I0827 23:15:15.496576 1773524 start.go:901] validating driver "docker" against &{Name:functional-572102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-572102 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:15:15.496700 1773524 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:15:15.498979 1773524 out.go:201] 
	W0827 23:15:15.501787 1773524 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0827 23:15:15.503419 1773524 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-572102 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.57s)
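What DryRun asserts is that requesting 250MB is rejected up front with RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23, below the 1800MB usable minimum), while a dry run without the flag succeeds. A sketch of that assertion using only flags that appear in the log.

package main

import (
	"fmt"
	"os/exec"
)

// dryRun runs `minikube start --dry-run` with optional extra flags and returns the exit code.
func dryRun(extra ...string) int {
	args := append([]string{"start", "-p", "functional-572102", "--dry-run",
		"--driver=docker", "--container-runtime=containerd"}, extra...)
	if err := exec.Command("out/minikube-linux-arm64", args...).Run(); err != nil {
		if ee, ok := err.(*exec.ExitError); ok {
			return ee.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	// 250MB is below the usable minimum, so validation fails before any real work.
	fmt.Println("tiny memory:", dryRun("--memory", "250MB")) // expect 23
	fmt.Println("defaults:   ", dryRun())                    // expect 0
}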

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-572102 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-572102 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (282.200596ms)

                                                
                                                
-- stdout --
	* [functional-572102] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:15:15.263089 1773507 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:15:15.263328 1773507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:15:15.263360 1773507 out.go:358] Setting ErrFile to fd 2...
	I0827 23:15:15.263380 1773507 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:15:15.263847 1773507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:15:15.264510 1773507 out.go:352] Setting JSON to false
	I0827 23:15:15.266144 1773507 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":25065,"bootTime":1724775451,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0827 23:15:15.266264 1773507 start.go:139] virtualization:  
	I0827 23:15:15.269415 1773507 out.go:177] * [functional-572102] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0827 23:15:15.272121 1773507 notify.go:220] Checking for updates...
	I0827 23:15:15.274008 1773507 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:15:15.275973 1773507 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:15:15.277640 1773507 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:15:15.279264 1773507 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	I0827 23:15:15.281077 1773507 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 23:15:15.282898 1773507 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:15:15.285212 1773507 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:15:15.285842 1773507 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:15:15.336832 1773507 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 23:15:15.336949 1773507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:15:15.436724 1773507 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-27 23:15:15.425867386 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:15:15.436835 1773507 docker.go:307] overlay module found
	I0827 23:15:15.439260 1773507 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0827 23:15:15.440855 1773507 start.go:297] selected driver: docker
	I0827 23:15:15.440874 1773507 start.go:901] validating driver "docker" against &{Name:functional-572102 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724667927-19511@sha256:b76289bde084f8cc5aa1f5685cd851c6acc563e6f33ea479e9ba6777b63de760 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-572102 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0827 23:15:15.440980 1773507 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:15:15.443438 1773507 out.go:201] 
	W0827 23:15:15.445272 1773507 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0827 23:15:15.446810 1773507 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-572102 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-572102 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-659v9" [ca2fc462-5a77-42b3-920f-b5732afd0cd8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-659v9" [ca2fc462-5a77-42b3-920f-b5732afd0cd8] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003934577s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30898
functional_test.go:1675: http://192.168.49.2:30898: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-659v9

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30898
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.71s)
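The ServiceCmdConnect flow is: create a deployment, expose it as a NodePort service, ask minikube for the URL, and GET it. A compact sketch of the same flow with the image and names copied from the log; the readiness wait the real test performs is noted but not implemented here.

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

// must runs a command and panics with its combined output on failure.
func must(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v: %v\n%s", name, args, err, out))
	}
	return strings.TrimSpace(string(out))
}

func main() {
	kctx := []string{"--context", "functional-572102"}
	must("kubectl", append(kctx, "create", "deployment", "hello-node-connect",
		"--image=registry.k8s.io/echoserver-arm:1.8")...)
	must("kubectl", append(kctx, "expose", "deployment", "hello-node-connect",
		"--type=NodePort", "--port=8080")...)
	// The real test waits for the pod to be Running before using the URL.
	url := must("out/minikube-linux-arm64", "-p", "functional-572102",
		"service", "hello-node-connect", "--url")
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // echoserver reports hostname, request headers, etc.
}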

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [38e09e0a-7431-489f-bac3-d1b97ea51848] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004653967s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-572102 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-572102 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-572102 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-572102 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-572102 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fff86ff3-46c6-4fc8-a27d-4db5d510a1be] Pending
helpers_test.go:344: "sp-pod" [fff86ff3-46c6-4fc8-a27d-4db5d510a1be] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fff86ff3-46c6-4fc8-a27d-4db5d510a1be] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004275948s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-572102 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-572102 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-572102 delete -f testdata/storage-provisioner/pod.yaml: (1.815781968s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-572102 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3e53d424-8a35-4a8a-a348-1b4e872b070d] Pending
helpers_test.go:344: "sp-pod" [3e53d424-8a35-4a8a-a348-1b4e872b070d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3e53d424-8a35-4a8a-a348-1b4e872b070d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.006788364s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-572102 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.68s)
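PersistentVolumeClaim verifies that data written into the PVC-backed mount survives pod deletion and recreation: touch a file, recreate the pod from the same manifest, then list the mount. A minimal sketch of that check; the manifest path, pod name, and mount path are the ones referenced in the log, and the wait for the recreated pod is only noted as a comment.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the cluster context from the log.
func kubectl(args ...string) ([]byte, error) {
	full := append([]string{"--context", "functional-572102"}, args...)
	return exec.Command("kubectl", full...).CombinedOutput()
}

func main() {
	// Write a marker file into the PVC-backed mount of the running pod.
	if out, err := kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo"); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	// Recreate the pod from the same manifest the test applies.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (The real test waits for the new pod to be Running before this point.)
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	if err != nil {
		panic(err)
	}
	fmt.Printf("after recreate: %s", out) // should still list "foo"
}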

                                                
                                    
TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh -n functional-572102 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cp functional-572102:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2998367247/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh -n functional-572102 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh -n functional-572102 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.70s)

                                                
                                    
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1739715/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo cat /etc/test/nested/copy/1739715/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
TestFunctional/parallel/CertSync (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1739715.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo cat /etc/ssl/certs/1739715.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1739715.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo cat /usr/share/ca-certificates/1739715.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/17397152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo cat /etc/ssl/certs/17397152.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/17397152.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo cat /usr/share/ca-certificates/17397152.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.10s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-572102 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 ssh "sudo systemctl is-active docker": exit status 1 (340.400215ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 ssh "sudo systemctl is-active crio": exit status 1 (388.580171ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.73s)
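With containerd as the active runtime, the docker and crio units report "inactive" and systemctl exits non-zero inside the node, which minikube ssh surfaces as the failures above. A short sketch of that probe, assuming the binary and profile from this log.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// On a containerd cluster the other runtimes' units should be inactive.
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-572102",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		fmt.Printf("%s: %s (err: %v)\n", unit, out, err) // expect "inactive" and a non-nil err
	}
}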

                                                
                                    
TestFunctional/parallel/License (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-572102 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-572102 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-572102 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1770069: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-572102 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-572102 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-572102 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7e0c19da-a0b0-4c1a-96ef-ba7b0e3fa1da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7e0c19da-a0b0-4c1a-96ef-ba7b0e3fa1da] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.007250654s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-572102 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.16s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.78.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-572102 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-572102 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-572102 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-hd5xz" [45f5fd90-e818-4569-83e9-230322411019] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-hd5xz" [45f5fd90-e818-4569-83e9-230322411019] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.020166876s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 service list -o json
functional_test.go:1494: Took "534.320355ms" to run "out/minikube-linux-arm64 -p functional-572102 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32010
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32010
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "334.898842ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "65.768605ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "339.250298ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "53.79355ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
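`profile list -o json` emits machine-readable profile data; the "Took" lines above only measure how long the full and --light variants run. A sketch that decodes the output generically, deliberately not assuming the exact schema beyond it being a JSON object at the top level.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Keep the values raw so no field names beyond the top level are assumed.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}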

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdany-port2716497446/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724800504117464723" to /tmp/TestFunctionalparallelMountCmdany-port2716497446/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724800504117464723" to /tmp/TestFunctionalparallelMountCmdany-port2716497446/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724800504117464723" to /tmp/TestFunctionalparallelMountCmdany-port2716497446/001/test-1724800504117464723
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.209443ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 27 23:15 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 27 23:15 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 27 23:15 test-1724800504117464723
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh cat /mount-9p/test-1724800504117464723
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-572102 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [18f9d154-4838-40ac-88c5-05edfc8a7864] Pending
helpers_test.go:344: "busybox-mount" [18f9d154-4838-40ac-88c5-05edfc8a7864] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [18f9d154-4838-40ac-88c5-05edfc8a7864] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [18f9d154-4838-40ac-88c5-05edfc8a7864] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004649546s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-572102 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdany-port2716497446/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.92s)
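
Note: the 9p mount workflow exercised above can be replayed by hand with the same minikube commands the test logged; a minimal sketch, assuming a running functional-572102 profile and a throwaway host directory (the /tmp/demo-mount path is illustrative, not the test's actual temp dir):

  # share a host directory into the guest over 9p (runs until interrupted)
  out/minikube-linux-arm64 mount -p functional-572102 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 &
  # confirm the 9p filesystem is visible and list what was shared
  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-572102 ssh -- ls -la /mount-9p
  # tear the mount down when finished
  out/minikube-linux-arm64 -p functional-572102 ssh "sudo umount -f /mount-9p"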

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdspecific-port680731241/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdspecific-port680731241/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 ssh "sudo umount -f /mount-9p": exit status 1 (301.673181ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-572102 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdspecific-port680731241/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.20s)
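
Note: the specific-port variant differs only in pinning the 9p server to a fixed port; a minimal sketch based on the command logged above (46464 is simply the port this run used):

  out/minikube-linux-arm64 mount -p functional-572102 /tmp/demo-mount:/mount-9p --alsologtostderr -v=1 --port 46464 &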

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3963838328/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3963838328/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3963838328/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T" /mount1: exit status 1 (592.999577ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-572102 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3963838328/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3963838328/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-572102 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3963838328/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.90s)
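
Note: VerifyCleanup checks that one kill command tears down every outstanding mount daemon for the profile; a minimal sketch of the same sequence, with the /tmp/demo-mount host path again being illustrative:

  # start several mount daemons against the same host directory
  out/minikube-linux-arm64 mount -p functional-572102 /tmp/demo-mount:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-572102 /tmp/demo-mount:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-572102 /tmp/demo-mount:/mount3 --alsologtostderr -v=1 &
  # spot-check one of them, then kill all mount processes for the profile in one shot
  out/minikube-linux-arm64 -p functional-572102 ssh "findmnt -T" /mount1
  out/minikube-linux-arm64 mount -p functional-572102 --kill=true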

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (1.2s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 version -o=json --components
2024/08/27 23:15:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 version -o=json --components: (1.204038943s)
--- PASS: TestFunctional/parallel/Version/components (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-572102 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-572102
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-572102
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-572102 image ls --format short --alsologtostderr:
I0827 23:15:28.784060 1775569 out.go:345] Setting OutFile to fd 1 ...
I0827 23:15:28.784277 1775569 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:28.784304 1775569 out.go:358] Setting ErrFile to fd 2...
I0827 23:15:28.784324 1775569 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:28.784621 1775569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
I0827 23:15:28.785349 1775569 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:28.785546 1775569 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:28.786123 1775569 cli_runner.go:164] Run: docker container inspect functional-572102 --format={{.State.Status}}
I0827 23:15:28.807953 1775569 ssh_runner.go:195] Run: systemctl --version
I0827 23:15:28.808118 1775569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-572102
I0827 23:15:28.833666 1775569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/functional-572102/id_rsa Username:docker}
I0827 23:15:28.948973 1775569 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
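
Note: this test and the three ImageList* variants that follow run the same listing command and only vary the output format; the four invocations, as logged (each also accepts --alsologtostderr):

  out/minikube-linux-arm64 -p functional-572102 image ls --format short
  out/minikube-linux-arm64 -p functional-572102 image ls --format table
  out/minikube-linux-arm64 -p functional-572102 image ls --format json
  out/minikube-linux-arm64 -p functional-572102 image ls --format yaml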

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-572102 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kicbase/echo-server               | functional-572102  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-572102  | sha256:ec7f14 | 991B   |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-572102 image ls --format table --alsologtostderr:
I0827 23:15:29.988119 1775850 out.go:345] Setting OutFile to fd 1 ...
I0827 23:15:29.988255 1775850 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:29.988265 1775850 out.go:358] Setting ErrFile to fd 2...
I0827 23:15:29.988271 1775850 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:29.988558 1775850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
I0827 23:15:29.989308 1775850 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:29.989480 1775850 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:29.990150 1775850 cli_runner.go:164] Run: docker container inspect functional-572102 --format={{.State.Status}}
I0827 23:15:30.048919 1775850 ssh_runner.go:195] Run: systemctl --version
I0827 23:15:30.048979 1775850 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-572102
I0827 23:15:30.116562 1775850 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/functional-572102/id_rsa Username:docker}
I0827 23:15:30.241715 1775850 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-572102 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-572102"],"size":"2173567"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b66279
1831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:ec7f14b7f6605f3656ee97e62ab683cfe55b9f6a61cf7bd25edb67cdbd458834","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-572102"],"size":"991"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add"],
"repoTags":["docker.io/library/nginx:latest"],"size":"67690150"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kin
dest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9
be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-572102 image ls --format json --alsologtostderr:
I0827 23:15:29.704529 1775765 out.go:345] Setting OutFile to fd 1 ...
I0827 23:15:29.704779 1775765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:29.704808 1775765 out.go:358] Setting ErrFile to fd 2...
I0827 23:15:29.704828 1775765 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:29.705155 1775765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
I0827 23:15:29.705931 1775765 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:29.706142 1775765 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:29.706815 1775765 cli_runner.go:164] Run: docker container inspect functional-572102 --format={{.State.Status}}
I0827 23:15:29.725786 1775765 ssh_runner.go:195] Run: systemctl --version
I0827 23:15:29.725850 1775765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-572102
I0827 23:15:29.744629 1775765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/functional-572102/id_rsa Username:docker}
I0827 23:15:29.841572 1775765 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-572102 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ec7f14b7f6605f3656ee97e62ab683cfe55b9f6a61cf7bd25edb67cdbd458834
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-572102
size: "991"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-572102
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-572102 image ls --format yaml --alsologtostderr:
I0827 23:15:29.049171 1775650 out.go:345] Setting OutFile to fd 1 ...
I0827 23:15:29.049406 1775650 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:29.049438 1775650 out.go:358] Setting ErrFile to fd 2...
I0827 23:15:29.049465 1775650 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:29.049721 1775650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
I0827 23:15:29.050399 1775650 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:29.050571 1775650 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:29.051131 1775650 cli_runner.go:164] Run: docker container inspect functional-572102 --format={{.State.Status}}
I0827 23:15:29.071181 1775650 ssh_runner.go:195] Run: systemctl --version
I0827 23:15:29.071241 1775650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-572102
I0827 23:15:29.092495 1775650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/functional-572102/id_rsa Username:docker}
I0827 23:15:29.193752 1775650 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-572102 ssh pgrep buildkitd: exit status 1 (300.279989ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image build -t localhost/my-image:functional-572102 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 image build -t localhost/my-image:functional-572102 testdata/build --alsologtostderr: (3.585274104s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-572102 image build -t localhost/my-image:functional-572102 testdata/build --alsologtostderr:
I0827 23:15:29.599155 1775751 out.go:345] Setting OutFile to fd 1 ...
I0827 23:15:29.600454 1775751 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:29.600473 1775751 out.go:358] Setting ErrFile to fd 2...
I0827 23:15:29.600479 1775751 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0827 23:15:29.600753 1775751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
I0827 23:15:29.601457 1775751 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:29.602795 1775751 config.go:182] Loaded profile config "functional-572102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0827 23:15:29.603345 1775751 cli_runner.go:164] Run: docker container inspect functional-572102 --format={{.State.Status}}
I0827 23:15:29.632479 1775751 ssh_runner.go:195] Run: systemctl --version
I0827 23:15:29.632552 1775751 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-572102
I0827 23:15:29.666730 1775751 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33549 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/functional-572102/id_rsa Username:docker}
I0827 23:15:29.773050 1775751 build_images.go:161] Building image from path: /tmp/build.1086989568.tar
I0827 23:15:29.773132 1775751 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0827 23:15:29.783424 1775751 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1086989568.tar
I0827 23:15:29.788119 1775751 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1086989568.tar: stat -c "%s %y" /var/lib/minikube/build/build.1086989568.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1086989568.tar': No such file or directory
I0827 23:15:29.788160 1775751 ssh_runner.go:362] scp /tmp/build.1086989568.tar --> /var/lib/minikube/build/build.1086989568.tar (3072 bytes)
I0827 23:15:29.818105 1775751 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1086989568
I0827 23:15:29.827882 1775751 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1086989568 -xf /var/lib/minikube/build/build.1086989568.tar
I0827 23:15:29.837835 1775751 containerd.go:394] Building image: /var/lib/minikube/build/build.1086989568
I0827 23:15:29.837987 1775751 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1086989568 --local dockerfile=/var/lib/minikube/build/build.1086989568 --output type=image,name=localhost/my-image:functional-572102
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.7s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:fe6868b68e79fedf3ddd38959b275535d37f2ac95cdca9160de904e2ae83342a
#8 exporting manifest sha256:fe6868b68e79fedf3ddd38959b275535d37f2ac95cdca9160de904e2ae83342a 0.0s done
#8 exporting config sha256:9a7a422afed20f135c1bf5f0d2cc50179872b9dbd7d99ca7c13c48b407bb96d9 0.0s done
#8 naming to localhost/my-image:functional-572102 done
#8 DONE 0.1s
I0827 23:15:33.093420 1775751 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1086989568 --local dockerfile=/var/lib/minikube/build/build.1086989568 --output type=image,name=localhost/my-image:functional-572102: (3.255383735s)
I0827 23:15:33.093535 1775751 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1086989568
I0827 23:15:33.105313 1775751 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1086989568.tar
I0827 23:15:33.117369 1775751 build_images.go:217] Built localhost/my-image:functional-572102 from /tmp/build.1086989568.tar
I0827 23:15:33.117406 1775751 build_images.go:133] succeeded building to: functional-572102
I0827 23:15:33.117412 1775751 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.13s)
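
Note: the buildkit stages logged above (load Dockerfile, FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt) imply a small three-step Dockerfile under testdata/build; the report does not include that file, so the reconstruction below is an assumption from those stages, paired with the build command the test actually ran:

  # hypothetical testdata/build/Dockerfile matching the logged stages
  FROM gcr.io/k8s-minikube/busybox:latest
  RUN true
  ADD content.txt /

  # build it inside the cluster's container runtime via minikube
  out/minikube-linux-arm64 -p functional-572102 image build -t localhost/my-image:functional-572102 testdata/build --alsologtostderr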

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.79s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-572102
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image load --daemon kicbase/echo-server:functional-572102 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image load --daemon kicbase/echo-server:functional-572102 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-572102
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image load --daemon kicbase/echo-server:functional-572102 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-572102 image load --daemon kicbase/echo-server:functional-572102 --alsologtostderr: (1.026433358s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image save kicbase/echo-server:functional-572102 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image rm kicbase/echo-server:functional-572102 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.63s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.81s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-572102
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 image save --daemon kicbase/echo-server:functional-572102 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-572102
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
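
Note: taken together, the ImageCommands tests above exercise a full save/load round trip for a locally tagged image; a minimal sketch of that cycle using the commands they logged (the tarball path here is shortened from the workspace path in the log, adjust as needed):

  # tag a throwaway image and push it into the cluster's runtime
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-572102
  out/minikube-linux-arm64 -p functional-572102 image load --daemon kicbase/echo-server:functional-572102
  # export it to a tarball, remove it, then restore it from the tarball
  out/minikube-linux-arm64 -p functional-572102 image save kicbase/echo-server:functional-572102 ./echo-server-save.tar
  out/minikube-linux-arm64 -p functional-572102 image rm kicbase/echo-server:functional-572102
  out/minikube-linux-arm64 -p functional-572102 image load ./echo-server-save.tar
  # or save straight back into the host docker daemon and confirm it landed
  out/minikube-linux-arm64 -p functional-572102 image save --daemon kicbase/echo-server:functional-572102
  docker image inspect kicbase/echo-server:functional-572102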

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 update-context --alsologtostderr -v=2
E0827 23:15:32.750144 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:15:32.756894 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:15:32.768252 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:15:32.789718 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:15:32.831157 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:15:32.912571 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:15:33.074153 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-572102 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-572102
E0827 23:15:33.395704 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-572102
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-572102
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (118.85s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-534764 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0827 23:15:37.881754 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:15:43.004001 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:15:53.245344 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:16:13.726966 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:16:54.688925 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-534764 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m57.953985525s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (118.85s)
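
Note: the HA start above brings up a multi-control-plane cluster with a single command; the invocation and the follow-up status check, as logged:

  out/minikube-linux-arm64 start -p ha-534764 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr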

                                                
                                    
TestMultiControlPlane/serial/DeployApp (29.13s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-534764 -- rollout status deployment/busybox: (26.124749456s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-2vspn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-mjnd2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-tgtd4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-2vspn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-mjnd2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-tgtd4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-2vspn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-mjnd2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-tgtd4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (29.13s)
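
Note: DeployApp rolls out the busybox test deployment and then checks in-cluster DNS from each pod; a minimal sketch of that verification loop using the logged commands (<busybox-pod> stands for whichever pod names the deployment produced):

  out/minikube-linux-arm64 kubectl -p ha-534764 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-arm64 kubectl -p ha-534764 -- rollout status deployment/busybox
  # resolve an external name and the in-cluster service name from inside a pod
  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec <busybox-pod> -- nslookup kubernetes.io
  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local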

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.64s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-2vspn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-2vspn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-mjnd2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-mjnd2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-tgtd4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-534764 -- exec busybox-7dff88458-tgtd4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.64s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (25.18s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-534764 -v=7 --alsologtostderr
E0827 23:18:16.610558 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-534764 -v=7 --alsologtostderr: (24.056785953s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr: (1.119433043s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.18s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-534764 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-534764 status --output json -v=7 --alsologtostderr: (1.01936067s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp testdata/cp-test.txt ha-534764:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2695377109/001/cp-test_ha-534764.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764:/home/docker/cp-test.txt ha-534764-m02:/home/docker/cp-test_ha-534764_ha-534764-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test_ha-534764_ha-534764-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764:/home/docker/cp-test.txt ha-534764-m03:/home/docker/cp-test_ha-534764_ha-534764-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test_ha-534764_ha-534764-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764:/home/docker/cp-test.txt ha-534764-m04:/home/docker/cp-test_ha-534764_ha-534764-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m04 "sudo cat /home/docker/cp-test_ha-534764_ha-534764-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp testdata/cp-test.txt ha-534764-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2695377109/001/cp-test_ha-534764-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m02:/home/docker/cp-test.txt ha-534764:/home/docker/cp-test_ha-534764-m02_ha-534764.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764 "sudo cat /home/docker/cp-test_ha-534764-m02_ha-534764.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m02:/home/docker/cp-test.txt ha-534764-m03:/home/docker/cp-test_ha-534764-m02_ha-534764-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test_ha-534764-m02_ha-534764-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m02:/home/docker/cp-test.txt ha-534764-m04:/home/docker/cp-test_ha-534764-m02_ha-534764-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m04 "sudo cat /home/docker/cp-test_ha-534764-m02_ha-534764-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp testdata/cp-test.txt ha-534764-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2695377109/001/cp-test_ha-534764-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m03:/home/docker/cp-test.txt ha-534764:/home/docker/cp-test_ha-534764-m03_ha-534764.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764 "sudo cat /home/docker/cp-test_ha-534764-m03_ha-534764.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m03:/home/docker/cp-test.txt ha-534764-m02:/home/docker/cp-test_ha-534764-m03_ha-534764-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test_ha-534764-m03_ha-534764-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m03:/home/docker/cp-test.txt ha-534764-m04:/home/docker/cp-test_ha-534764-m03_ha-534764-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m04 "sudo cat /home/docker/cp-test_ha-534764-m03_ha-534764-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp testdata/cp-test.txt ha-534764-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2695377109/001/cp-test_ha-534764-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m04:/home/docker/cp-test.txt ha-534764:/home/docker/cp-test_ha-534764-m04_ha-534764.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764 "sudo cat /home/docker/cp-test_ha-534764-m04_ha-534764.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m04:/home/docker/cp-test.txt ha-534764-m02:/home/docker/cp-test_ha-534764-m04_ha-534764-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test_ha-534764-m04_ha-534764-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 cp ha-534764-m04:/home/docker/cp-test.txt ha-534764-m03:/home/docker/cp-test_ha-534764-m04_ha-534764-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test_ha-534764-m04_ha-534764-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.94s)
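
For reference, every hop above follows the same copy-then-verify pattern: minikube cp moves the file (host to node, node to host, or node to node), and minikube ssh -n cats it back on the destination. A minimal sketch with node names from this run:

    PROFILE=ha-534764
    # host -> node, then read it back on that node
    out/minikube-linux-arm64 -p "$PROFILE" cp testdata/cp-test.txt ha-534764-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p "$PROFILE" ssh -n ha-534764-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> node, then read it back on the destination node
    out/minikube-linux-arm64 -p "$PROFILE" cp ha-534764-m02:/home/docker/cp-test.txt \
      ha-534764-m03:/home/docker/cp-test_ha-534764-m02_ha-534764-m03.txt
    out/minikube-linux-arm64 -p "$PROFILE" ssh -n ha-534764-m03 "sudo cat /home/docker/cp-test_ha-534764-m02_ha-534764-m03.txt"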

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-534764 node stop m02 -v=7 --alsologtostderr: (12.159740009s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr: exit status 7 (763.171268ms)

                                                
                                                
-- stdout --
	ha-534764
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-534764-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-534764-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-534764-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:19:04.086360 1792162 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:19:04.086480 1792162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:19:04.086488 1792162 out.go:358] Setting ErrFile to fd 2...
	I0827 23:19:04.086493 1792162 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:19:04.086727 1792162 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:19:04.086965 1792162 out.go:352] Setting JSON to false
	I0827 23:19:04.087003 1792162 mustload.go:65] Loading cluster: ha-534764
	I0827 23:19:04.087105 1792162 notify.go:220] Checking for updates...
	I0827 23:19:04.087521 1792162 config.go:182] Loaded profile config "ha-534764": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:19:04.087538 1792162 status.go:255] checking status of ha-534764 ...
	I0827 23:19:04.088097 1792162 cli_runner.go:164] Run: docker container inspect ha-534764 --format={{.State.Status}}
	I0827 23:19:04.109568 1792162 status.go:330] ha-534764 host status = "Running" (err=<nil>)
	I0827 23:19:04.109596 1792162 host.go:66] Checking if "ha-534764" exists ...
	I0827 23:19:04.109938 1792162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-534764
	I0827 23:19:04.156162 1792162 host.go:66] Checking if "ha-534764" exists ...
	I0827 23:19:04.156562 1792162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:19:04.156635 1792162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-534764
	I0827 23:19:04.175994 1792162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33554 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/ha-534764/id_rsa Username:docker}
	I0827 23:19:04.274044 1792162 ssh_runner.go:195] Run: systemctl --version
	I0827 23:19:04.279047 1792162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:19:04.291127 1792162 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:19:04.352789 1792162 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-27 23:19:04.341871179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:19:04.353377 1792162 kubeconfig.go:125] found "ha-534764" server: "https://192.168.49.254:8443"
	I0827 23:19:04.353417 1792162 api_server.go:166] Checking apiserver status ...
	I0827 23:19:04.353467 1792162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:19:04.365278 1792162 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1454/cgroup
	I0827 23:19:04.374829 1792162 api_server.go:182] apiserver freezer: "9:freezer:/docker/8d2243b5edeb4e52d852c76577bbbc609eef55f0905b2190fa46ef23e6ee9ab4/kubepods/burstable/podf96f57966b235d661b44d39e71e1bc76/0ee2042e6891bcca9d3f489b2618cd4fc30a59333af713b8ce4f41c19b53a12f"
	I0827 23:19:04.374962 1792162 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8d2243b5edeb4e52d852c76577bbbc609eef55f0905b2190fa46ef23e6ee9ab4/kubepods/burstable/podf96f57966b235d661b44d39e71e1bc76/0ee2042e6891bcca9d3f489b2618cd4fc30a59333af713b8ce4f41c19b53a12f/freezer.state
	I0827 23:19:04.389447 1792162 api_server.go:204] freezer state: "THAWED"
	I0827 23:19:04.389481 1792162 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0827 23:19:04.401927 1792162 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0827 23:19:04.401956 1792162 status.go:422] ha-534764 apiserver status = Running (err=<nil>)
	I0827 23:19:04.401967 1792162 status.go:257] ha-534764 status: &{Name:ha-534764 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:19:04.401985 1792162 status.go:255] checking status of ha-534764-m02 ...
	I0827 23:19:04.402323 1792162 cli_runner.go:164] Run: docker container inspect ha-534764-m02 --format={{.State.Status}}
	I0827 23:19:04.420031 1792162 status.go:330] ha-534764-m02 host status = "Stopped" (err=<nil>)
	I0827 23:19:04.420052 1792162 status.go:343] host is not running, skipping remaining checks
	I0827 23:19:04.420060 1792162 status.go:257] ha-534764-m02 status: &{Name:ha-534764-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:19:04.420081 1792162 status.go:255] checking status of ha-534764-m03 ...
	I0827 23:19:04.420530 1792162 cli_runner.go:164] Run: docker container inspect ha-534764-m03 --format={{.State.Status}}
	I0827 23:19:04.438822 1792162 status.go:330] ha-534764-m03 host status = "Running" (err=<nil>)
	I0827 23:19:04.438844 1792162 host.go:66] Checking if "ha-534764-m03" exists ...
	I0827 23:19:04.439288 1792162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-534764-m03
	I0827 23:19:04.457590 1792162 host.go:66] Checking if "ha-534764-m03" exists ...
	I0827 23:19:04.457909 1792162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:19:04.457954 1792162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-534764-m03
	I0827 23:19:04.475436 1792162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33564 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/ha-534764-m03/id_rsa Username:docker}
	I0827 23:19:04.573704 1792162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:19:04.587779 1792162 kubeconfig.go:125] found "ha-534764" server: "https://192.168.49.254:8443"
	I0827 23:19:04.587810 1792162 api_server.go:166] Checking apiserver status ...
	I0827 23:19:04.587852 1792162 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:19:04.602494 1792162 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1369/cgroup
	I0827 23:19:04.612909 1792162 api_server.go:182] apiserver freezer: "9:freezer:/docker/51a93d245a2b846ad8cb42c7bd4a59f21d606e157d9367525d73be8763d24485/kubepods/burstable/pode12a22ee479d00b104e896dc56a18196/db7ef7ed3ff96db77d52b6d08b8b23e355ed501f6e14251c998fc47518ee575a"
	I0827 23:19:04.613023 1792162 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/51a93d245a2b846ad8cb42c7bd4a59f21d606e157d9367525d73be8763d24485/kubepods/burstable/pode12a22ee479d00b104e896dc56a18196/db7ef7ed3ff96db77d52b6d08b8b23e355ed501f6e14251c998fc47518ee575a/freezer.state
	I0827 23:19:04.622935 1792162 api_server.go:204] freezer state: "THAWED"
	I0827 23:19:04.622969 1792162 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0827 23:19:04.630748 1792162 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0827 23:19:04.630777 1792162 status.go:422] ha-534764-m03 apiserver status = Running (err=<nil>)
	I0827 23:19:04.630795 1792162 status.go:257] ha-534764-m03 status: &{Name:ha-534764-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:19:04.630814 1792162 status.go:255] checking status of ha-534764-m04 ...
	I0827 23:19:04.631147 1792162 cli_runner.go:164] Run: docker container inspect ha-534764-m04 --format={{.State.Status}}
	I0827 23:19:04.648044 1792162 status.go:330] ha-534764-m04 host status = "Running" (err=<nil>)
	I0827 23:19:04.648073 1792162 host.go:66] Checking if "ha-534764-m04" exists ...
	I0827 23:19:04.648725 1792162 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-534764-m04
	I0827 23:19:04.666233 1792162 host.go:66] Checking if "ha-534764-m04" exists ...
	I0827 23:19:04.666550 1792162 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:19:04.666606 1792162 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-534764-m04
	I0827 23:19:04.683882 1792162 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33569 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/ha-534764-m04/id_rsa Username:docker}
	I0827 23:19:04.781548 1792162 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:19:04.793385 1792162 status.go:257] ha-534764-m04 status: &{Name:ha-534764-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.92s)
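
Note the exit code above: minikube status reports exit 0 only when everything is running, so with m02 stopped it exits 7 even though the remaining control-plane nodes stay healthy. Roughly:

    PROFILE=ha-534764
    out/minikube-linux-arm64 -p "$PROFILE" node stop m02 -v=7 --alsologtostderr
    # exits non-zero (7 in this run) while any node is stopped
    out/minikube-linux-arm64 -p "$PROFILE" status -v=7 --alsologtostderr || echo "status exit code: $?"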

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-534764 node start m02 -v=7 --alsologtostderr: (17.615243282s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr: (1.006380564s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.74s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-534764 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-534764 -v=7 --alsologtostderr
E0827 23:19:32.501818 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:32.508555 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:32.519964 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:32.541375 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:32.582829 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:32.664236 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:32.826059 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:33.147471 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:33.789470 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:35.070984 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:37.633868 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:42.756106 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:19:52.998386 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-534764 -v=7 --alsologtostderr: (37.484753547s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-534764 --wait=true -v=7 --alsologtostderr
E0827 23:20:13.479708 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:20:32.748944 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:20:54.442180 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:21:00.451900 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-534764 --wait=true -v=7 --alsologtostderr: (1m50.0940927s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-534764
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (147.77s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-534764 node delete m03 -v=7 --alsologtostderr: (9.666182601s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.63s)
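
The go-template in the last step is the readiness probe used after each topology change: it walks every node's conditions and prints the status of the Ready condition, one value per node. Run standalone it looks roughly like this (context name from this run):

    kubectl --context ha-534764 get nodes \
      -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expected: one "True" per remaining node once the delete has settled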

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 stop -v=7 --alsologtostderr
E0827 23:22:16.363775 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-534764 stop -v=7 --alsologtostderr: (35.979142741s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr: exit status 7 (118.446902ms)

                                                
                                                
-- stdout --
	ha-534764
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-534764-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-534764-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:22:39.942324 1806449 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:22:39.942444 1806449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:22:39.942456 1806449 out.go:358] Setting ErrFile to fd 2...
	I0827 23:22:39.942461 1806449 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:22:39.942706 1806449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:22:39.942891 1806449 out.go:352] Setting JSON to false
	I0827 23:22:39.942934 1806449 mustload.go:65] Loading cluster: ha-534764
	I0827 23:22:39.943039 1806449 notify.go:220] Checking for updates...
	I0827 23:22:39.943351 1806449 config.go:182] Loaded profile config "ha-534764": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:22:39.943362 1806449 status.go:255] checking status of ha-534764 ...
	I0827 23:22:39.944201 1806449 cli_runner.go:164] Run: docker container inspect ha-534764 --format={{.State.Status}}
	I0827 23:22:39.960314 1806449 status.go:330] ha-534764 host status = "Stopped" (err=<nil>)
	I0827 23:22:39.960334 1806449 status.go:343] host is not running, skipping remaining checks
	I0827 23:22:39.960342 1806449 status.go:257] ha-534764 status: &{Name:ha-534764 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:22:39.960404 1806449 status.go:255] checking status of ha-534764-m02 ...
	I0827 23:22:39.960730 1806449 cli_runner.go:164] Run: docker container inspect ha-534764-m02 --format={{.State.Status}}
	I0827 23:22:39.981901 1806449 status.go:330] ha-534764-m02 host status = "Stopped" (err=<nil>)
	I0827 23:22:39.981925 1806449 status.go:343] host is not running, skipping remaining checks
	I0827 23:22:39.981933 1806449 status.go:257] ha-534764-m02 status: &{Name:ha-534764-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:22:39.981953 1806449 status.go:255] checking status of ha-534764-m04 ...
	I0827 23:22:39.982281 1806449 cli_runner.go:164] Run: docker container inspect ha-534764-m04 --format={{.State.Status}}
	I0827 23:22:40.004594 1806449 status.go:330] ha-534764-m04 host status = "Stopped" (err=<nil>)
	I0827 23:22:40.004619 1806449 status.go:343] host is not running, skipping remaining checks
	I0827 23:22:40.004627 1806449 status.go:257] ha-534764-m04 status: &{Name:ha-534764-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.10s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (64.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-534764 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-534764 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.216798853s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.21s)
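
For reference, the restart exercised here is simply a full stop followed by a start with --wait=true, which waits for the cluster components to come back before returning; the Ready template above is then re-checked. A condensed sketch with this run's profile name:

    PROFILE=ha-534764
    out/minikube-linux-arm64 stop -p "$PROFILE" -v=7 --alsologtostderr
    out/minikube-linux-arm64 start -p "$PROFILE" --wait=true -v=7 --alsologtostderr \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p "$PROFILE" status -v=7 --alsologtostderr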

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (41.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-534764 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-534764 --control-plane -v=7 --alsologtostderr: (40.895793169s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-534764 status -v=7 --alsologtostderr: (1.061184805s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.96s)
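
Adding a control-plane node uses the same node add command as the worker earlier, plus the --control-plane flag; status afterwards should list the new node as a Control Plane. Roughly:

    PROFILE=ha-534764
    out/minikube-linux-arm64 node add -p "$PROFILE" --control-plane -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p "$PROFILE" status -v=7 --alsologtostderr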

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                    
TestJSONOutput/start/Command (52.23s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-728464 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0827 23:25:00.206563 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-728464 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (52.221584882s)
--- PASS: TestJSONOutput/start/Command (52.23s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-728464 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-728464 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.70s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.76s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-728464 --output=json --user=testUser
E0827 23:25:32.749845 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-728464 --output=json --user=testUser: (5.755188185s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-302134 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-302134 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (73.19424ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c71d1d1e-498e-42d4-8052-3f0067833145","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-302134] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"19a7dcc7-450b-4a8b-a389-1dbda27cbcb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19522"}}
	{"specversion":"1.0","id":"5a586b04-5ddd-49d3-a705-2262fb5b2ec8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a5728229-6279-4193-bd48-254f79c30c9e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig"}}
	{"specversion":"1.0","id":"4032a065-9027-4ec2-913d-c810799ba78a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube"}}
	{"specversion":"1.0","id":"082620fe-a4b2-4915-85bc-66e1f23db5b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"16a35cf2-bf20-4619-9164-86d9c208991c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"efe01a67-0c82-4b2e-a73a-93e08c873c39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-302134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-302134
--- PASS: TestErrorJSONOutput (0.21s)
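
With --output=json every step and error is emitted as a CloudEvents-style JSON line on stdout, as shown above. Assuming jq is available (it is not part of this report), the error for a run like this can be pulled out with something along these lines, using the field names visible in the events above:

    out/minikube-linux-arm64 start -p json-output-error-302134 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # -> DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64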

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.2s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-062573 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-062573 --network=: (36.398229898s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-062573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-062573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-062573: (1.770825557s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.20s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.29s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-465584 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-465584 --network=bridge: (34.246009593s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-465584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-465584
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-465584: (2.011386026s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.29s)

                                                
                                    
TestKicExistingNetwork (33.59s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-565887 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-565887 --network=existing-network: (31.480873568s)
helpers_test.go:175: Cleaning up "existing-network-565887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-565887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-565887: (1.958886317s)
--- PASS: TestKicExistingNetwork (33.59s)

                                                
                                    
TestKicCustomSubnet (35.41s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-993865 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-993865 --subnet=192.168.60.0/24: (33.26786919s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-993865 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-993865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-993865
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-993865: (2.114425093s)
--- PASS: TestKicCustomSubnet (35.41s)
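
The subnet check relies on the kic driver naming the Docker network after the profile, so the requested CIDR can be read straight back out of the network's IPAM config:

    out/minikube-linux-arm64 start -p custom-subnet-993865 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-993865 --format "{{(index .IPAM.Config 0).Subnet}}"
    # should print 192.168.60.0/24, the requested subnet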

                                                
                                    
TestKicStaticIP (35.96s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-487099 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-487099 --static-ip=192.168.200.200: (33.727177634s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-487099 ip
helpers_test.go:175: Cleaning up "static-ip-487099" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-487099
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-487099: (2.063703033s)
--- PASS: TestKicStaticIP (35.96s)
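
Similarly, the static-IP check starts the profile with --static-ip and then asks minikube for the node IP, which should echo the requested address back:

    out/minikube-linux-arm64 start -p static-ip-487099 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-487099 ip
    # should print 192.168.200.200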

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (72.4s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-065268 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-065268 --driver=docker  --container-runtime=containerd: (36.334238739s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-067902 --driver=docker  --container-runtime=containerd
E0827 23:29:32.501539 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-067902 --driver=docker  --container-runtime=containerd: (30.450147365s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-065268
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-067902
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-067902" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-067902
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-067902: (2.008993636s)
helpers_test.go:175: Cleaning up "first-065268" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-065268
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-065268: (2.277065927s)
--- PASS: TestMinikubeProfile (72.40s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.64s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-493978 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-493978 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.640845435s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.64s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-493978 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.30s)
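Verifying the mount reduces to listing the mount point over `ssh --`; a non-zero exit means the host directory never appeared inside the node. A minimal sketch, assuming the profile and mount point used in this run:

package sketch

import (
	"os/exec"
	"testing"
)

func TestVerifyMountSketch(t *testing.T) {
	// List the mount point inside the node; failure implies the --mount
	// directory was not propagated into the guest.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-1-493978",
		"ssh", "--", "ls", "/minikube-host")
	if out, err := cmd.CombinedOutput(); err != nil {
		t.Fatalf("mount check failed: %v\n%s", err, out)
	}
}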

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.53s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-508255 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-508255 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.530484368s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.53s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508255 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-493978 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-493978 --alsologtostderr -v=5: (1.642946461s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508255 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-508255
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-508255: (1.211700135s)
--- PASS: TestMountStart/serial/Stop (1.21s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.96s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-508255
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-508255: (6.958795318s)
--- PASS: TestMountStart/serial/RestartStopped (7.96s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-508255 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (69.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110506 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0827 23:30:32.749520 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-110506 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.979794041s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.53s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (17.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-110506 -- rollout status deployment/busybox: (15.739025555s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-7tmpv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-vcs9q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-7tmpv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-vcs9q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-7tmpv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-vcs9q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.59s)
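The rollout above is followed by DNS probes from each busybox replica, one per node, against three names of increasing specificity. A hypothetical sketch of that probe loop, using the pod names discovered in this run:

package sketch

import (
	"os/exec"
	"testing"
)

func TestPodDNSSketch(t *testing.T) {
	pods := []string{"busybox-7dff88458-7tmpv", "busybox-7dff88458-vcs9q"} // names from this run
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			cmd := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-110506",
				"--", "exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				t.Errorf("nslookup %s from %s failed: %v\n%s", name, pod, err, out)
			}
		}
	}
}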

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-7tmpv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-7tmpv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-vcs9q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-110506 -- exec busybox-7dff88458-vcs9q -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
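Host reachability is exercised in two steps per pod: resolve host.minikube.internal inside the pod (the awk/cut pipeline extracts the address from nslookup output), then ping that address once. A sketch of those two steps for a single pod, with the extracted address additionally validated as an IP:

package sketch

import (
	"net"
	"os/exec"
	"strings"
	"testing"
)

func TestPingHostSketch(t *testing.T) {
	const pod = "busybox-7dff88458-7tmpv" // pod name from this run
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-110506",
		"--", "exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		t.Fatalf("resolving host.minikube.internal: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	if net.ParseIP(hostIP) == nil {
		t.Fatalf("expected an IP address, got %q", hostIP)
	}
	ping := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-110506",
		"--", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	if pout, err := ping.CombinedOutput(); err != nil {
		t.Errorf("ping %s failed: %v\n%s", hostIP, err, pout)
	}
}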

                                                
                                    
TestMultiNode/serial/AddNode (17.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-110506 -v 3 --alsologtostderr
E0827 23:31:55.813807 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-110506 -v 3 --alsologtostderr: (16.335440309s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.02s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-110506 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.33s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp testdata/cp-test.txt multinode-110506:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202836362/001/cp-test_multinode-110506.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506:/home/docker/cp-test.txt multinode-110506-m02:/home/docker/cp-test_multinode-110506_multinode-110506-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m02 "sudo cat /home/docker/cp-test_multinode-110506_multinode-110506-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506:/home/docker/cp-test.txt multinode-110506-m03:/home/docker/cp-test_multinode-110506_multinode-110506-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m03 "sudo cat /home/docker/cp-test_multinode-110506_multinode-110506-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp testdata/cp-test.txt multinode-110506-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202836362/001/cp-test_multinode-110506-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506-m02:/home/docker/cp-test.txt multinode-110506:/home/docker/cp-test_multinode-110506-m02_multinode-110506.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506 "sudo cat /home/docker/cp-test_multinode-110506-m02_multinode-110506.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506-m02:/home/docker/cp-test.txt multinode-110506-m03:/home/docker/cp-test_multinode-110506-m02_multinode-110506-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m03 "sudo cat /home/docker/cp-test_multinode-110506-m02_multinode-110506-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp testdata/cp-test.txt multinode-110506-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3202836362/001/cp-test_multinode-110506-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506-m03:/home/docker/cp-test.txt multinode-110506:/home/docker/cp-test_multinode-110506-m03_multinode-110506.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506 "sudo cat /home/docker/cp-test_multinode-110506-m03_multinode-110506.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 cp multinode-110506-m03:/home/docker/cp-test.txt multinode-110506-m02:/home/docker/cp-test_multinode-110506-m03_multinode-110506-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 ssh -n multinode-110506-m02 "sudo cat /home/docker/cp-test_multinode-110506-m03_multinode-110506-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.42s)
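Every `cp` above is paired with an `ssh -- sudo cat` so the copied file can be compared with the local source. A sketch of one such round trip, reusing the test data path and node names from this run:

package sketch

import (
	"bytes"
	"os"
	"os/exec"
	"testing"
)

func TestCopyFileSketch(t *testing.T) {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		t.Fatalf("reading local test file: %v", err)
	}
	// Copy into the m02 node, then read the file back over SSH.
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-110506", "cp",
		"testdata/cp-test.txt", "multinode-110506-m02:/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		t.Fatalf("cp failed: %v\n%s", err, out)
	}
	got, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-110506", "ssh",
		"-n", "multinode-110506-m02", "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		t.Fatalf("reading file back: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		t.Errorf("copied file differs from source")
	}
}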

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-110506 node stop m03: (1.2171277s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-110506 status: exit status 7 (532.40127ms)

                                                
                                                
-- stdout --
	multinode-110506
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-110506-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-110506-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-110506 status --alsologtostderr: exit status 7 (560.590103ms)

                                                
                                                
-- stdout --
	multinode-110506
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-110506-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-110506-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:32:20.042876 1860065 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:32:20.043046 1860065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:32:20.043052 1860065 out.go:358] Setting ErrFile to fd 2...
	I0827 23:32:20.043057 1860065 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:32:20.043438 1860065 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:32:20.043678 1860065 out.go:352] Setting JSON to false
	I0827 23:32:20.043712 1860065 mustload.go:65] Loading cluster: multinode-110506
	I0827 23:32:20.047833 1860065 config.go:182] Loaded profile config "multinode-110506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:32:20.047870 1860065 status.go:255] checking status of multinode-110506 ...
	I0827 23:32:20.049015 1860065 cli_runner.go:164] Run: docker container inspect multinode-110506 --format={{.State.Status}}
	I0827 23:32:20.049268 1860065 notify.go:220] Checking for updates...
	I0827 23:32:20.072003 1860065 status.go:330] multinode-110506 host status = "Running" (err=<nil>)
	I0827 23:32:20.072028 1860065 host.go:66] Checking if "multinode-110506" exists ...
	I0827 23:32:20.072362 1860065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-110506
	I0827 23:32:20.093317 1860065 host.go:66] Checking if "multinode-110506" exists ...
	I0827 23:32:20.093755 1860065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:32:20.093806 1860065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-110506
	I0827 23:32:20.121544 1860065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33674 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/multinode-110506/id_rsa Username:docker}
	I0827 23:32:20.221766 1860065 ssh_runner.go:195] Run: systemctl --version
	I0827 23:32:20.226580 1860065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:32:20.239605 1860065 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:32:20.296029 1860065 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-27 23:32:20.28567073 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:32:20.296685 1860065 kubeconfig.go:125] found "multinode-110506" server: "https://192.168.67.2:8443"
	I0827 23:32:20.296724 1860065 api_server.go:166] Checking apiserver status ...
	I0827 23:32:20.296771 1860065 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0827 23:32:20.309926 1860065 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	I0827 23:32:20.320265 1860065 api_server.go:182] apiserver freezer: "9:freezer:/docker/e1b7477b29d276e33cbed5c55854f64fe094375d924c387c64b4d3c1314490c5/kubepods/burstable/pod2fcaef2faf37476e2b950f1a3d3c2980/48ae844dfd42f546ef3916ec2ee08040d85649c64f4874eb48474c47afdf51bd"
	I0827 23:32:20.320344 1860065 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e1b7477b29d276e33cbed5c55854f64fe094375d924c387c64b4d3c1314490c5/kubepods/burstable/pod2fcaef2faf37476e2b950f1a3d3c2980/48ae844dfd42f546ef3916ec2ee08040d85649c64f4874eb48474c47afdf51bd/freezer.state
	I0827 23:32:20.330978 1860065 api_server.go:204] freezer state: "THAWED"
	I0827 23:32:20.331008 1860065 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0827 23:32:20.341763 1860065 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0827 23:32:20.341794 1860065 status.go:422] multinode-110506 apiserver status = Running (err=<nil>)
	I0827 23:32:20.341806 1860065 status.go:257] multinode-110506 status: &{Name:multinode-110506 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:32:20.341824 1860065 status.go:255] checking status of multinode-110506-m02 ...
	I0827 23:32:20.342134 1860065 cli_runner.go:164] Run: docker container inspect multinode-110506-m02 --format={{.State.Status}}
	I0827 23:32:20.358071 1860065 status.go:330] multinode-110506-m02 host status = "Running" (err=<nil>)
	I0827 23:32:20.358099 1860065 host.go:66] Checking if "multinode-110506-m02" exists ...
	I0827 23:32:20.358412 1860065 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-110506-m02
	I0827 23:32:20.375287 1860065 host.go:66] Checking if "multinode-110506-m02" exists ...
	I0827 23:32:20.375616 1860065 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0827 23:32:20.375656 1860065 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-110506-m02
	I0827 23:32:20.392768 1860065 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33679 SSHKeyPath:/home/jenkins/minikube-integration/19522-1734325/.minikube/machines/multinode-110506-m02/id_rsa Username:docker}
	I0827 23:32:20.493412 1860065 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0827 23:32:20.505473 1860065 status.go:257] multinode-110506-m02 status: &{Name:multinode-110506-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:32:20.505551 1860065 status.go:255] checking status of multinode-110506-m03 ...
	I0827 23:32:20.505912 1860065 cli_runner.go:164] Run: docker container inspect multinode-110506-m03 --format={{.State.Status}}
	I0827 23:32:20.522875 1860065 status.go:330] multinode-110506-m03 host status = "Stopped" (err=<nil>)
	I0827 23:32:20.522897 1860065 status.go:343] host is not running, skipping remaining checks
	I0827 23:32:20.522905 1860065 status.go:257] multinode-110506-m03 status: &{Name:multinode-110506-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
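After m03 is stopped, `minikube status` intentionally exits non-zero (exit status 7 in this run) while still printing per-node state on stdout, so any caller has to tolerate the exit error and inspect the output instead. A sketch of that pattern:

package sketch

import (
	"errors"
	"os/exec"
	"strings"
	"testing"
)

func TestStatusAfterStopSketch(t *testing.T) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "multinode-110506", "status").Output()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		t.Fatalf("status did not run at all: %v", err) // only exit-code errors are expected here
	}
	text := string(out)
	// With one worker stopped, both states should appear in the report.
	if !strings.Contains(text, "host: Running") || !strings.Contains(text, "host: Stopped") {
		t.Errorf("unexpected status output:\n%s", text)
	}
}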

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-110506 node start m03 -v=7 --alsologtostderr: (8.768322299s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.63s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (95.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-110506
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-110506
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-110506: (25.063284482s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110506 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-110506 --wait=true -v=8 --alsologtostderr: (1m10.78074869s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-110506
--- PASS: TestMultiNode/serial/RestartKeepsNodes (95.98s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-110506 node delete m03: (4.91171692s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.61s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 stop
E0827 23:34:32.501204 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-110506 stop: (23.867134411s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-110506 status: exit status 7 (93.978958ms)

                                                
                                                
-- stdout --
	multinode-110506
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-110506-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-110506 status --alsologtostderr: exit status 7 (84.721308ms)

                                                
                                                
-- stdout --
	multinode-110506
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-110506-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:34:35.754681 1868524 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:34:35.754879 1868524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:34:35.754909 1868524 out.go:358] Setting ErrFile to fd 2...
	I0827 23:34:35.754931 1868524 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:34:35.755191 1868524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:34:35.755433 1868524 out.go:352] Setting JSON to false
	I0827 23:34:35.755505 1868524 mustload.go:65] Loading cluster: multinode-110506
	I0827 23:34:35.755570 1868524 notify.go:220] Checking for updates...
	I0827 23:34:35.755995 1868524 config.go:182] Loaded profile config "multinode-110506": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:34:35.756323 1868524 status.go:255] checking status of multinode-110506 ...
	I0827 23:34:35.757106 1868524 cli_runner.go:164] Run: docker container inspect multinode-110506 --format={{.State.Status}}
	I0827 23:34:35.774387 1868524 status.go:330] multinode-110506 host status = "Stopped" (err=<nil>)
	I0827 23:34:35.774407 1868524 status.go:343] host is not running, skipping remaining checks
	I0827 23:34:35.774414 1868524 status.go:257] multinode-110506 status: &{Name:multinode-110506 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0827 23:34:35.774439 1868524 status.go:255] checking status of multinode-110506-m02 ...
	I0827 23:34:35.774768 1868524 cli_runner.go:164] Run: docker container inspect multinode-110506-m02 --format={{.State.Status}}
	I0827 23:34:35.791732 1868524 status.go:330] multinode-110506-m02 host status = "Stopped" (err=<nil>)
	I0827 23:34:35.791751 1868524 status.go:343] host is not running, skipping remaining checks
	I0827 23:34:35.791756 1868524 status.go:257] multinode-110506-m02 status: &{Name:multinode-110506-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.05s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110506 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-110506 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.900419227s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-110506 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-110506
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110506-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-110506-m02 --driver=docker  --container-runtime=containerd: exit status 14 (92.934822ms)

                                                
                                                
-- stdout --
	* [multinode-110506-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-110506-m02' is duplicated with machine name 'multinode-110506-m02' in profile 'multinode-110506'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-110506-m03 --driver=docker  --container-runtime=containerd
E0827 23:35:32.749137 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:35:55.567953 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-110506-m03 --driver=docker  --container-runtime=containerd: (31.640009068s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-110506
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-110506: exit status 80 (319.644563ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-110506 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-110506-m03 already exists in multinode-110506-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-110506-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-110506-m03: (1.986087389s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.09s)
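The duplicate-name case is an exit-code and stderr assertion: reusing a machine name that already belongs to an existing profile must fail fast (exit status 14 above) with the "Profile name should be unique" message. A hypothetical sketch of that assertion:

package sketch

import (
	"errors"
	"os/exec"
	"strings"
	"testing"
)

func TestNameConflictSketch(t *testing.T) {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "multinode-110506-m02",
		"--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		t.Fatalf("expected a non-zero exit for the duplicated profile name, got err=%v", err)
	}
	if !strings.Contains(string(out), "Profile name should be unique") {
		t.Errorf("expected the uniqueness error, got:\n%s", out)
	}
}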

                                                
                                    
TestPreload (116.23s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-064918 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-064918 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m19.110541427s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-064918 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-064918 image pull gcr.io/k8s-minikube/busybox: (2.0097221s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-064918
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-064918: (12.103092649s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-064918 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-064918 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.222689546s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-064918 image list
helpers_test.go:175: Cleaning up "test-preload-064918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-064918
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-064918: (2.429713795s)
--- PASS: TestPreload (116.23s)
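The point of this flow is that an image pulled before the stop (gcr.io/k8s-minikube/busybox) must still be listed after the restart, which is what the final `image list` call checks. A minimal sketch of that last step:

package sketch

import (
	"os/exec"
	"strings"
	"testing"
)

func TestPreloadImageSketch(t *testing.T) {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "test-preload-064918", "image", "list").Output()
	if err != nil {
		t.Fatalf("image list: %v", err)
	}
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		t.Errorf("busybox image missing after restart:\n%s", out)
	}
}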

                                                
                                    
TestScheduledStopUnix (108.84s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-667009 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-667009 --memory=2048 --driver=docker  --container-runtime=containerd: (32.907591235s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-667009 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-667009 -n scheduled-stop-667009
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-667009 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-667009 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-667009 -n scheduled-stop-667009
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-667009
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-667009 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0827 23:39:32.502513 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-667009
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-667009: exit status 7 (78.999437ms)

                                                
                                                
-- stdout --
	scheduled-stop-667009
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-667009 -n scheduled-stop-667009
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-667009 -n scheduled-stop-667009: exit status 7 (63.616226ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-667009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-667009
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-667009: (4.330649781s)
--- PASS: TestScheduledStopUnix (108.84s)
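The sequence above schedules a stop far in the future, cancels it, and confirms the host is still running, then schedules a short stop and waits for the host to reach Stopped (hence the later exit status 7 from `status`). A sketch of the schedule-and-cancel half, assuming the profile name from this run:

package sketch

import (
	"os/exec"
	"strings"
	"testing"
)

func TestScheduledStopCancelSketch(t *testing.T) {
	const profile = "scheduled-stop-667009" // profile name from this run
	run := func(args ...string) string {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		if err != nil {
			t.Fatalf("%v: %v\n%s", args, err, out)
		}
		return string(out)
	}
	run("stop", "-p", profile, "--schedule", "5m")
	run("stop", "-p", profile, "--cancel-scheduled")
	host := run("status", "--format={{.Host}}", "-p", profile, "-n", profile)
	if !strings.Contains(host, "Running") {
		t.Errorf("expected host to still be Running after cancelling, got %q", host)
	}
}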

                                                
                                    
TestInsufficientStorage (10.42s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-558710 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-558710 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.912349071s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"51e66696-93f9-4db1-87cf-228654257926","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-558710] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"785d43fe-7e70-486c-9f1d-605fd94678e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19522"}}
	{"specversion":"1.0","id":"255102b9-861d-4702-ba45-be77565cf048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1a805b62-037d-4e45-b2fb-995758689fa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig"}}
	{"specversion":"1.0","id":"c6f8495e-4cfb-4d47-9287-3ac9ed0bdf53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube"}}
	{"specversion":"1.0","id":"afe82900-2c0d-4d03-b545-2352a90b3414","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"966a86a9-49a7-4c8d-ae03-4b0501d54237","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"08a8e354-83a4-489d-a79e-552bdad947c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f101ed3d-da03-41b5-8ba4-489698d09033","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e17c266b-7c9c-443f-8814-d383f8646638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"88e06fb8-de49-4224-b332-89258e197a47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2f9d5c75-d76e-4e9f-a68b-74904115711a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-558710\" primary control-plane node in \"insufficient-storage-558710\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d89abd42-aba1-4e86-ac2d-3e6cc026fca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724667927-19511 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"54b0ce05-6daa-4b0f-8edf-e9c3013a9103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e85b2191-f1e9-4e39-8ade-9889b62b12f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-558710 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-558710 --output=json --layout=cluster: exit status 7 (326.164041ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-558710","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-558710","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 23:39:56.252609 1887248 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-558710" does not appear in /home/jenkins/minikube-integration/19522-1734325/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-558710 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-558710 --output=json --layout=cluster: exit status 7 (282.859139ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-558710","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-558710","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0827 23:39:56.534243 1887310 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-558710" does not appear in /home/jenkins/minikube-integration/19522-1734325/kubeconfig
	E0827 23:39:56.544410 1887310 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/insufficient-storage-558710/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-558710" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-558710
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-558710: (1.896897377s)
--- PASS: TestInsufficientStorage (10.42s)
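With --output=json, `start` emits one CloudEvents-style JSON object per line, and the storage failure shows up as an error event with name RSRC_DOCKER_STORAGE and exitcode 26 (visible in the stdout above). A sketch of scanning captured stdout for that event, modelling only the fields shown in this run:

package sketch

import (
	"bufio"
	"encoding/json"
	"strings"
)

// event models only the fields this sketch inspects from the JSON lines above.
type event struct {
	Type string `json:"type"`
	Data struct {
		Name     string `json:"name"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

// findStorageError reports whether the captured stdout contains the
// RSRC_DOCKER_STORAGE error event with the expected exit code.
func findStorageError(stdout string) bool {
	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &ev); err != nil {
			continue // not a JSON event line; skip
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data.Name == "RSRC_DOCKER_STORAGE" {
			return ev.Data.ExitCode == "26"
		}
	}
	return false
}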

                                                
                                    
TestRunningBinaryUpgrade (82.04s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3622058467 start -p running-upgrade-239970 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0827 23:44:32.503740 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3622058467 start -p running-upgrade-239970 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.001234829s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-239970 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0827 23:45:32.749392 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-239970 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.68294675s)
helpers_test.go:175: Cleaning up "running-upgrade-239970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-239970
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-239970: (2.541838253s)
--- PASS: TestRunningBinaryUpgrade (82.04s)

                                                
                                    
TestKubernetesUpgrade (352.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-295125 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-295125 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.665298183s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-295125
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-295125: (1.399231504s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-295125 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-295125 status --format={{.Host}}: exit status 7 (92.34924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-295125 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-295125 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.959148316s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-295125 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-295125 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-295125 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (121.799863ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-295125] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-295125
	    minikube start -p kubernetes-upgrade-295125 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2951252 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-295125 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-295125 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-295125 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.441699613s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-295125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-295125
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-295125: (2.31785774s)
--- PASS: TestKubernetesUpgrade (352.35s)
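
This block upgrades a cluster from Kubernetes v1.20.0 to v1.31.0, confirms that an in-place downgrade is rejected with K8S_DOWNGRADE_UNSUPPORTED (exit status 106), and then restarts at the newer version. A condensed sketch of the same sequence, following the recovery path the error message itself suggests (profile name is a placeholder):

    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    minikube stop -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd
    # an in-place downgrade of the existing cluster is refused
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd \
      || echo "downgrade rejected, as expected"
    # going back to an older version means deleting and recreating the profile
    minikube delete -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.20.0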

                                                
                                    
TestMissingContainerUpgrade (180.39s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2689555264 start -p missing-upgrade-810134 --memory=2200 --driver=docker  --container-runtime=containerd
E0827 23:40:32.749500 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2689555264 start -p missing-upgrade-810134 --memory=2200 --driver=docker  --container-runtime=containerd: (1m43.332287475s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-810134
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-810134: (10.301944974s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-810134
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-810134 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-810134 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.644871641s)
helpers_test.go:175: Cleaning up "missing-upgrade-810134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-810134
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-810134: (2.266215983s)
--- PASS: TestMissingContainerUpgrade (180.39s)
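
Here the binary upgrade happens after the cluster's node container has been stopped and removed, so the new binary must recreate the container for an existing profile (the container is named after the profile, as the docker stop/rm calls above show). A rough reproduction, again assuming minikube-v1.26.0 and ./minikube as placeholder binary names:

    minikube-v1.26.0 start -p missing-upgrade-demo --memory=2200 --driver=docker --container-runtime=containerd
    # simulate the "missing" node container
    docker stop missing-upgrade-demo
    docker rm missing-upgrade-demo
    # the new binary should recreate the container and bring the profile back up
    ./minikube start -p missing-upgrade-demo --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd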

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239737 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-239737 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (86.973671ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-239737] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
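
The exit status 14 (MK_USAGE) above confirms that --no-kubernetes and --kubernetes-version are mutually exclusive. A quick check of that behaviour, including the workaround the error message suggests when a version is pinned in the global config (profile name is a placeholder):

    # rejected: a Kubernetes version makes no sense for a cluster without Kubernetes
    minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd
    # if kubernetes-version is set in the global config, unset it before using --no-kubernetes
    minikube config unset kubernetes-version
    minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=containerd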

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239737 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-239737 --driver=docker  --container-runtime=containerd: (39.356215095s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-239737 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239737 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-239737 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.795814014s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-239737 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-239737 status -o json: exit status 2 (292.253601ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-239737","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-239737
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-239737: (1.924525218s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.01s)

                                                
                                    
TestNoKubernetes/serial/Start (9.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239737 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-239737 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.069280925s)
--- PASS: TestNoKubernetes/serial/Start (9.07s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-239737 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-239737 "sudo systemctl is-active --quiet service kubelet": exit status 1 (258.144435ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
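
The verification leans on the exit status of systemctl: with is-active --quiet, the command exits non-zero when the unit is not active, so a failing minikube ssh call proves kubelet is not running. A sketch of the same check with an explicit branch on the exit status (profile name is a placeholder):

    if minikube ssh -p nok8s-demo "sudo systemctl is-active --quiet service kubelet"; then
        echo "kubelet is running"
    else
        echo "kubelet is not running (expected for a --no-kubernetes profile)"
    fi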

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-239737
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-239737: (1.236196339s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-239737 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-239737 --driver=docker  --container-runtime=containerd: (6.958287976s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-239737 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-239737 "sudo systemctl is-active --quiet service kubelet": exit status 1 (347.019513ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.84s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (83.88s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3276794009 start -p stopped-upgrade-537814 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3276794009 start -p stopped-upgrade-537814 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.29837662s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3276794009 -p stopped-upgrade-537814 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3276794009 -p stopped-upgrade-537814 stop: (1.438770111s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-537814 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-537814 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.146093869s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.88s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-537814
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-537814: (1.075828656s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.08s)

                                                
                                    
TestPause/serial/Start (51.76s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-539881 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-539881 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (51.760329193s)
--- PASS: TestPause/serial/Start (51.76s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.68s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-539881 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-539881 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.668802803s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.68s)

                                                
                                    
TestPause/serial/Pause (0.96s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-539881 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.96s)

                                                
                                    
TestPause/serial/VerifyStatus (0.43s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-539881 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-539881 --output=json --layout=cluster: exit status 2 (431.340235ms)

                                                
                                                
-- stdout --
	{"Name":"pause-539881","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-539881","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.43s)
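
With --output=json --layout=cluster, minikube status emits a single JSON document; a paused cluster reports StatusCode 418 / StatusName "Paused" and the command exits non-zero, which is why the test treats exit status 2 as success. A small sketch that pulls out the relevant fields, assuming jq is available (jq is not part of the test itself, and the profile name is a placeholder):

    minikube status -p pause-demo --output=json --layout=cluster > status.json || true
    jq -r '.StatusName' status.json                                  # e.g. "Paused"
    jq -r '.Nodes[0].Components.apiserver.StatusName' status.json    # e.g. "Paused"
    jq -r '.Nodes[0].Components.kubelet.StatusName' status.json      # e.g. "Stopped"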

                                                
                                    
TestPause/serial/Unpause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-539881 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

                                                
                                    
TestPause/serial/PauseAgain (1.15s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-539881 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-539881 --alsologtostderr -v=5: (1.154411954s)
--- PASS: TestPause/serial/PauseAgain (1.15s)

                                                
                                    
TestPause/serial/DeletePaused (3.24s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-539881 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-539881 --alsologtostderr -v=5: (3.239057643s)
--- PASS: TestPause/serial/DeletePaused (3.24s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.07s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (2.970895672s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-539881
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-539881: exit status 1 (32.54103ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-539881: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.07s)
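
After minikube delete, the test confirms that no node container, volume, or profile entry is left behind; the volume lookup is expected to fail with "no such volume". A minimal check along the same lines (profile name is a placeholder; the --filter usage is illustrative, the test simply lists everything):

    minikube delete -p pause-demo --alsologtostderr -v=5
    docker ps -a --filter name=pause-demo              # should list no container for the profile
    docker volume inspect pause-demo || echo "volume gone, as expected"
    docker network ls --filter name=pause-demo         # the profile network should be absent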

                                                
                                    
TestNetworkPlugins/group/false (4.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-174115 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-174115 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (286.887156ms)

                                                
                                                
-- stdout --
	* [false-174115] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19522
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0827 23:47:14.833724 1927256 out.go:345] Setting OutFile to fd 1 ...
	I0827 23:47:14.833952 1927256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:47:14.833980 1927256 out.go:358] Setting ErrFile to fd 2...
	I0827 23:47:14.834003 1927256 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0827 23:47:14.834269 1927256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19522-1734325/.minikube/bin
	I0827 23:47:14.834740 1927256 out.go:352] Setting JSON to false
	I0827 23:47:14.836529 1927256 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":26984,"bootTime":1724775451,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0827 23:47:14.836633 1927256 start.go:139] virtualization:  
	I0827 23:47:14.839758 1927256 out.go:177] * [false-174115] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0827 23:47:14.841373 1927256 out.go:177]   - MINIKUBE_LOCATION=19522
	I0827 23:47:14.841437 1927256 notify.go:220] Checking for updates...
	I0827 23:47:14.844764 1927256 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0827 23:47:14.846415 1927256 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19522-1734325/kubeconfig
	I0827 23:47:14.848209 1927256 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19522-1734325/.minikube
	I0827 23:47:14.849828 1927256 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0827 23:47:14.851439 1927256 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0827 23:47:14.853753 1927256 config.go:182] Loaded profile config "force-systemd-flag-036455": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0827 23:47:14.853858 1927256 driver.go:392] Setting default libvirt URI to qemu:///system
	I0827 23:47:14.888717 1927256 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0827 23:47:14.888823 1927256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0827 23:47:15.030070 1927256 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-27 23:47:15.005857944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214843392 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0827 23:47:15.030204 1927256 docker.go:307] overlay module found
	I0827 23:47:15.038048 1927256 out.go:177] * Using the docker driver based on user configuration
	I0827 23:47:15.039958 1927256 start.go:297] selected driver: docker
	I0827 23:47:15.039981 1927256 start.go:901] validating driver "docker" against <nil>
	I0827 23:47:15.040002 1927256 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0827 23:47:15.042873 1927256 out.go:201] 
	W0827 23:47:15.044675 1927256 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0827 23:47:15.046397 1927256 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-174115 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-174115" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-174115

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-174115"

                                                
                                                
----------------------- debugLogs end: false-174115 [took: 4.135675655s] --------------------------------
helpers_test.go:175: Cleaning up "false-174115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-174115
--- PASS: TestNetworkPlugins/group/false (4.61s)
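
The immediate MK_USAGE failure is the expected outcome here: the containerd runtime requires a CNI, so --cni=false is rejected before any cluster is created, and the debugLogs output above consists only of "context not found" noise for the never-created profile. A sketch of the rejected invocation together with a working alternative (the profile name and the choice of the bridge CNI are illustrative, not taken from this log):

    # rejected with MK_USAGE: The "containerd" container runtime requires CNI
    minikube start -p cni-demo --cni=false --driver=docker --container-runtime=containerd
    # any real CNI satisfies containerd, for example the built-in bridge plugin
    minikube start -p cni-demo --cni=bridge --driver=docker --container-runtime=containerd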

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (175.77s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-394049 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0827 23:49:32.501376 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:50:32.749100 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-394049 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m55.770241053s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (175.77s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (79.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-710826 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-710826 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m19.047694753s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (79.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-394049 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [26e42416-9155-4d24-9b34-14b16ca9fca0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [26e42416-9155-4d24-9b34-14b16ca9fca0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00349074s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-394049 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.85s)
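
The deploy step creates a pod from testdata/busybox.yaml, waits for the integration-test=busybox label to become healthy, and then execs into the pod to read the open-file limit. The manifest itself is not shown in this log; a stand-in pod that would support the same exec check might look like the heredoc below (entirely illustrative, not the real testdata file):

    # hypothetical stand-in for testdata/busybox.yaml (the real manifest is not shown in this log)
    kubectl --context old-k8s-version-394049 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      labels:
        integration-test: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
    EOF
    kubectl --context old-k8s-version-394049 exec busybox -- /bin/sh -c "ulimit -n"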

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-394049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-394049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.183652421s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-394049 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.34s)
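
The addons enable call above shows per-addon overrides: --images maps an addon component to an alternative image, and --registries points that component at a different registry (here the deliberately unreachable fake.domain). The same command in isolation, followed by the describe call the test uses to confirm the override took effect (values are the ones from the log):

    minikube addons enable metrics-server -p old-k8s-version-394049 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain
    # inspect the resulting deployment to confirm the overridden image reference
    kubectl --context old-k8s-version-394049 describe deploy/metrics-server -n kube-system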

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-394049 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-394049 --alsologtostderr -v=3: (12.592878081s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.59s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-394049 -n old-k8s-version-394049
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-394049 -n old-k8s-version-394049: exit status 7 (113.094443ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-394049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-710826 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [183b5a85-38b0-4eab-a521-1a46df6fbd47] Pending
helpers_test.go:344: "busybox" [183b5a85-38b0-4eab-a521-1a46df6fbd47] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [183b5a85-38b0-4eab-a521-1a46df6fbd47] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00431748s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-710826 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-710826 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-710826 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.152308921s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-710826 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-710826 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-710826 --alsologtostderr -v=3: (12.124702311s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-710826 -n no-preload-710826
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-710826 -n no-preload-710826: exit status 7 (75.851904ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-710826 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (279.17s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-710826 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0827 23:54:32.501689 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0827 23:55:32.749546 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-710826 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m38.8032423s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-710826 -n no-preload-710826
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (279.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t5tg6" [3bfe3980-8ee7-4cdf-82db-6e603893e994] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004854762s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
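The "waiting 9m0s for pods matching ..." checks in this and the following blocks amount to polling the cluster for a Running pod with a given label. Below is a client-go sketch of the same idea; it is not the helpers_test.go implementation (which this report does not include), and the kubeconfig location and 5s poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		// List pods carrying the dashboard label and stop once one is Running.
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("dashboard pod is running:", p.Name)
					return
				}
			}
		}
		time.Sleep(5 * time.Second)
	}
	log.Fatal("timed out waiting for kubernetes-dashboard pod")
}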

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t5tg6" [3bfe3980-8ee7-4cdf-82db-6e603893e994] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004003175s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-710826 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-710826 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
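The VerifyKubernetesImages checks report any image in the profile that is not one of minikube's own. A rough standalone equivalent is sketched below; it assumes plain "image list" prints one reference per line, and the allowlist of prefixes is purely illustrative rather than the rule set the test actually applies.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "no-preload-710826",
		"image", "list").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Illustrative allowlist only; anything outside it is flagged, mirroring the
	// "Found non-minikube image" lines in the report.
	allowedPrefixes := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, ref := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		allowed := false
		for _, p := range allowedPrefixes {
			if strings.HasPrefix(ref, p) {
				allowed = true
				break
			}
		}
		if !allowed {
			fmt.Println("Found non-minikube image:", ref)
		}
	}
}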

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-710826 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-710826 -n no-preload-710826
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-710826 -n no-preload-710826: exit status 2 (348.139683ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-710826 -n no-preload-710826
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-710826 -n no-preload-710826: exit status 2 (318.758261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-710826 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-710826 -n no-preload-710826
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-710826 -n no-preload-710826
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.26s)
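The Pause test drives a pause / status / unpause sequence. The sketch below mirrors it with plain commands; the binary path and profile name come from the log, and the expectation that a paused profile reports APIServer "Paused" and Kubelet "Stopped" via non-zero status exits is taken only from the output shown above.

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// expectStatus prints one status field; non-zero exits are tolerated because
// status deliberately exits non-zero for paused/stopped components.
func expectStatus(bin, profile, field, want string) {
	cmd := exec.Command(bin, "status", "--format={{."+field+"}}", "-p", profile, "-n", profile)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if err != nil && !errors.As(err, &exitErr) {
		log.Fatalf("status %s: %v", field, err)
	}
	fmt.Printf("%s=%s (want %s)\n", field, strings.TrimSpace(string(out)), want)
}

func main() {
	bin, profile := "out/minikube-linux-arm64", "no-preload-710826"
	if out, err := exec.Command(bin, "pause", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("pause: %v\n%s", err, out)
	}
	expectStatus(bin, profile, "APIServer", "Paused")
	expectStatus(bin, profile, "Kubelet", "Stopped")
	if out, err := exec.Command(bin, "unpause", "-p", profile).CombinedOutput(); err != nil {
		log.Fatalf("unpause: %v\n%s", err, out)
	}
	expectStatus(bin, profile, "APIServer", "Running")
	expectStatus(bin, profile, "Kubelet", "Running")
}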

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (67.76s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-550752 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-550752 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m7.76306477s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-27b8v" [fb250caa-4690-4860-9345-76e790717916] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.058903377s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-27b8v" [fb250caa-4690-4860-9345-76e790717916] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004730164s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-394049 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-394049 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.91s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-394049 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-394049 --alsologtostderr -v=1: (1.167399211s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-394049 -n old-k8s-version-394049
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-394049 -n old-k8s-version-394049: exit status 2 (423.539967ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-394049 -n old-k8s-version-394049
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-394049 -n old-k8s-version-394049: exit status 2 (447.107773ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-394049 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-394049 -n old-k8s-version-394049
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-394049 -n old-k8s-version-394049
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-714625 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-714625 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m6.933784609s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (66.93s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-550752 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a9345d07-5443-4347-8c74-5e6d582fefed] Pending
helpers_test.go:344: "busybox" [a9345d07-5443-4347-8c74-5e6d582fefed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a9345d07-5443-4347-8c74-5e6d582fefed] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004164689s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-550752 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-550752 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0827 23:59:32.501527 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-550752 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028269192s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-550752 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-550752 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-550752 --alsologtostderr -v=3: (12.103369685s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-714625 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [322fd6d6-8e76-493b-8bc8-c149a118903f] Pending
helpers_test.go:344: "busybox" [322fd6d6-8e76-493b-8bc8-c149a118903f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [322fd6d6-8e76-493b-8bc8-c149a118903f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00438889s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-714625 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-550752 -n embed-certs-550752
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-550752 -n embed-certs-550752: exit status 7 (151.86264ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-550752 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (297.2s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-550752 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-550752 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m56.806145743s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-550752 -n embed-certs-550752
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-714625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-714625 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.620717127s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-714625 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.82s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.91s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-714625 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-714625 --alsologtostderr -v=3: (12.910971601s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625: exit status 7 (181.038307ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-714625 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-714625 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0828 00:00:32.749459 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:39.117868 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:39.124465 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:39.136005 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:39.157484 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:39.198983 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:39.280509 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:39.442602 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:39.764120 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:40.405791 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:41.687318 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:44.248561 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:49.370665 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:01:59.612786 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:20.094399 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:53.874397 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:53.880761 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:53.892140 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:53.913594 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:53.955071 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:54.036552 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:54.198027 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:54.520168 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:55.162095 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:56.443522 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:02:59.015780 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:03:01.055902 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:03:04.137985 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:03:14.379317 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:03:34.861003 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:04:15.822372 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:04:22.978198 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:04:32.502130 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-714625 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m27.840694677s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (268.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pmft6" [2df45532-4c84-45e1-a755-84275e27bf76] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005116147s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-pmft6" [2df45532-4c84-45e1-a755-84275e27bf76] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004586946s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-714625 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fmgv8" [476e3f8e-9284-49ef-9824-2fa3cc534071] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003626471s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-714625 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-714625 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625: exit status 2 (325.911286ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625: exit status 2 (337.277782ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-714625 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-714625 -n default-k8s-diff-port-714625
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.14s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fmgv8" [476e3f8e-9284-49ef-9824-2fa3cc534071] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005351151s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-550752 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.14s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-431306 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-431306 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (42.047212383s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-550752 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.78s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-550752 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-550752 --alsologtostderr -v=1: (1.014330432s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-550752 -n embed-certs-550752
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-550752 -n embed-certs-550752: exit status 2 (391.550846ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-550752 -n embed-certs-550752
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-550752 -n embed-certs-550752: exit status 2 (396.169272ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-550752 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-550752 --alsologtostderr -v=1: (1.018033613s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-550752 -n embed-certs-550752
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-550752 -n embed-certs-550752
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.78s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (55.06s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0828 00:05:15.820637 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:05:32.749453 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/addons-726754/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (55.059272281s)
--- PASS: TestNetworkPlugins/group/auto/Start (55.06s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-431306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-431306 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.879129882s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.88s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-431306 --alsologtostderr -v=3
E0828 00:05:37.744516 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-431306 --alsologtostderr -v=3: (1.332638041s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-431306 -n newest-cni-431306
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-431306 -n newest-cni-431306: exit status 7 (87.365436ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-431306 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.75s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-431306 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-431306 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (16.331613055s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-431306 -n newest-cni-431306
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.75s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-431306 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-431306 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-431306 -n newest-cni-431306
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-431306 -n newest-cni-431306: exit status 2 (324.175618ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-431306 -n newest-cni-431306
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-431306 -n newest-cni-431306: exit status 2 (405.853335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-431306 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-431306 -n newest-cni-431306
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-431306 -n newest-cni-431306
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.51s)
E0828 00:10:58.610960 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:10:58.617377 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:10:58.628816 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:10:58.650214 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:10:58.691678 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:10:58.773192 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:10:58.934622 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:10:59.256550 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:10:59.898638 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:11:01.180053 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:11:03.742167 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:11:05.648923 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:11:08.863685 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:11:19.105773 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-174115 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-174115 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p9l5f" [bb40ccbd-38ff-43d6-b68e-e790a500a25f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p9l5f" [bb40ccbd-38ff-43d6-b68e-e790a500a25f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004067126s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)
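Editor's note: the NetCatPod step is identical for every plugin below: apply the netcat deployment from testdata, then wait for the pod labelled app=netcat to come up. A rough manual equivalent, assuming kubectl wait is an acceptable stand-in for the test's own polling helper (context name and 15m timeout are taken from the log):

    kubectl --context auto-174115 replace --force -f testdata/netcat-deployment.yaml
    # Wait until the netcat pod reports Ready; the test polls pods with label app=netcat.
    kubectl --context auto-174115 wait --for=condition=Ready pod -l app=netcat --timeout=15m
    kubectl --context auto-174115 get pods -l app=netcat -o wide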

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (56.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (56.713491692s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.71s)
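Editor's note: each */Start test in this group boots a fresh profile with the CNI under test; only the profile name and the --cni value change between runs. The common invocation, with PROFILE and CNI as hypothetical placeholders for the values seen in the log:

    # PROFILE/CNI per run: kindnet-174115/kindnet, calico-174115/calico, flannel-174115/flannel,
    # bridge-174115/bridge, custom-flannel-174115/testdata/kube-flannel.yaml.
    # The enable-default-cni run swaps --cni=... for --enable-default-cni=true.
    out/minikube-linux-arm64 start -p "$PROFILE" \
      --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
      --cni="$CNI" --driver=docker --container-runtime=containerd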

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-174115 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)
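Editor's note: the three checks above (DNS, Localhost, HairPin) are single kubectl exec calls into the netcat deployment; as I read them, they verify cluster DNS resolution, loopback reachability from inside the pod, and hairpin traffic back to the pod through its own service. Reproduced from the log for the auto profile:

    # DNS: resolve the kubernetes.default service through cluster DNS.
    kubectl --context auto-174115 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: connect to localhost:8080 from inside the pod.
    kubectl --context auto-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: connect back to the pod via the netcat service name.
    kubectl --context auto-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"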

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (70.8s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0828 00:06:39.117822 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.79532097s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.80s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-djfvr" [c2eb2a26-247b-4c46-98af-cfc43de2ea20] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004471984s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
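Editor's note: the ControllerPod checks wait for the plugin's daemon pod under its well-known label (app=kindnet in kube-system here; k8s-app=calico-node and app=flannel in kube-flannel appear in later runs). A hand-run equivalent, assuming kubectl wait in place of the test's poller:

    # Wait up to 10 minutes for the kindnet daemon pod to become Ready.
    kubectl --context kindnet-174115 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m
    kubectl --context kindnet-174115 -n kube-system get pods -l app=kindnet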

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-174115 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-174115 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ztmk8" [2c365142-1d68-4d45-8d5e-b4ed3cb8e107] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0828 00:07:06.820043 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-ztmk8" [2c365142-1d68-4d45-8d5e-b4ed3cb8e107] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003943181s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-174115 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (53.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (53.852133266s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-8pjmc" [df7d69df-fd70-44da-9394-204ea3e227fb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005885171s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-174115 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-174115 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l7g55" [d8b0e47b-3e99-4381-960c-7f5013790828] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0828 00:07:53.874120 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/no-preload-710826/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-l7g55" [d8b0e47b-3e99-4381-960c-7f5013790828] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.005986222s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-174115 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (76.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.300383183s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-174115 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-174115 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6jcdm" [1a51315b-c2cc-4bd2-8932-e3a386180f9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6jcdm" [1a51315b-c2cc-4bd2-8932-e3a386180f9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003874642s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-174115 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (51.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0828 00:09:15.570860 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:32.501703 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/functional-572102/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:43.709789 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:43.716283 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:43.728334 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:43.749790 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:43.791951 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:43.873402 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:44.035214 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:44.356898 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:44.998760 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (51.30750053s)
--- PASS: TestNetworkPlugins/group/flannel/Start (51.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-174115 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-174115 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w57mc" [4a2836b0-9f54-4069-9c80-a3767c9dcd39] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0828 00:09:46.280481 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
E0828 00:09:48.842559 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-w57mc" [4a2836b0-9f54-4069-9c80-a3767c9dcd39] Running
E0828 00:09:53.964135 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004174792s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-174115 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bbk46" [73e1b777-e9e8-4d46-aad3-2096b9e2856e] Running
E0828 00:10:04.205660 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/default-k8s-diff-port-714625/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005063931s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-174115 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-174115 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-pvt2g" [7c0a5f9b-2f6e-4a64-bb64-ff491bc92f8b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-pvt2g" [7c0a5f9b-2f6e-4a64-bb64-ff491bc92f8b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.029517391s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (77.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-174115 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m17.043598135s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.04s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-174115 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-174115 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-174115 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d7b6q" [37cca586-a028-432c-817c-2702a7c57a51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0828 00:11:39.118569 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/old-k8s-version-394049/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-d7b6q" [37cca586-a028-432c-817c-2702a7c57a51] Running
E0828 00:11:39.587985 1739715 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19522-1734325/.minikube/profiles/auto-174115/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004606685s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-174115 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-174115 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    

Test skip (28/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-558946 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-558946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-558946
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-887458" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-887458
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-174115 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-174115" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-174115

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-174115"

                                                
                                                
----------------------- debugLogs end: kubenet-174115 [took: 4.140566862s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-174115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-174115
--- SKIP: TestNetworkPlugins/group/kubenet (4.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-174115 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-174115" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-174115

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-174115" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-174115"

                                                
                                                
----------------------- debugLogs end: cilium-174115 [took: 4.698207807s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-174115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-174115
--- SKIP: TestNetworkPlugins/group/cilium (4.88s)

                                                
                                    