Test Report: Docker_Linux_containerd_arm64 19450

8d898ab9c8ea504736c6a6ac30beb8b93591f909:2024-08-15:35798

Tests failed (2/328)

Order  Failed test                                              Duration (s)
29     TestAddons/serial/Volcano                                199.79
302    TestStartStop/group/old-k8s-version/serial/SecondStart   380.28
TestAddons/serial/Volcano (199.79s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 43.777601ms
addons_test.go:905: volcano-admission stabilized in 44.132825ms
addons_test.go:897: volcano-scheduler stabilized in 44.202232ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-x5rms" [84bb2074-1672-4c03-adbd-24e442f1345f] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004010773s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-5v8qt" [73395e7c-2504-4221-9d8f-99f64a9cac1b] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004050537s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-vjr4m" [af9f21b6-eb48-4450-b2c4-50a16c69ab40] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004143166s
addons_test.go:932: (dbg) Run:  kubectl --context addons-773218 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-773218 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-773218 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c007b14c-f1d8-4a4c-89fb-a81ddd2c2db1] Pending
helpers_test.go:344: "test-job-nginx-0" [c007b14c-f1d8-4a4c-89fb-a81ddd2c2db1] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-773218 -n addons-773218
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-15 17:12:17.643992625 +0000 UTC m=+432.438562429
addons_test.go:964: (dbg) Run:  kubectl --context addons-773218 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-773218 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-88530484-4a7b-43d2-a99d-5e4f7eacc78d
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gfmhk (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-gfmhk:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m58s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-773218 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-773218 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
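The failure above is a capacity problem, not a Volcano bug: the test-job pod requests a full CPU (Requests: cpu: 1) on a single-node cluster created with 2 CPUs (see "Creating docker container (CPUs=2, Memory=4000MB)" in the start log below), so once kube-system and the many enabled addons have taken their share, the volcano scheduler reports "0/1 nodes are unavailable: 1 Insufficient cpu." and the pod stays Pending until the 3m0s wait hits its context deadline (the rate-limiter warning is just the wait loop running out of deadline). As a minimal client-go sketch for inspecting the node side of that arithmetic (hypothetical standalone program, not the suite's helper code; assumes the kubeconfig that minikube start writes at the default location):

// checkcpu.go: hypothetical sketch, not part of minikube's test suite.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the default kubeconfig (~/.kube/config) written by "minikube start".
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// If allocatable CPU minus the requests of already-scheduled pods falls
		// below the pod's 1-CPU request, the scheduler reports "Insufficient cpu",
		// exactly as in the Events table above.
		fmt.Printf("%s allocatable cpu: %s\n", n.Name, n.Status.Allocatable.Cpu())
	}
}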
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-773218
helpers_test.go:235: (dbg) docker inspect addons-773218:

-- stdout --
	[
	    {
	        "Id": "c6c0d1582b0611dea84548360086b437d56fc94a64a07e3ad3b6295b76420f0d",
	        "Created": "2024-08-15T17:05:50.022486665Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299394,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T17:05:50.173726858Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/c6c0d1582b0611dea84548360086b437d56fc94a64a07e3ad3b6295b76420f0d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c6c0d1582b0611dea84548360086b437d56fc94a64a07e3ad3b6295b76420f0d/hostname",
	        "HostsPath": "/var/lib/docker/containers/c6c0d1582b0611dea84548360086b437d56fc94a64a07e3ad3b6295b76420f0d/hosts",
	        "LogPath": "/var/lib/docker/containers/c6c0d1582b0611dea84548360086b437d56fc94a64a07e3ad3b6295b76420f0d/c6c0d1582b0611dea84548360086b437d56fc94a64a07e3ad3b6295b76420f0d-json.log",
	        "Name": "/addons-773218",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-773218:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-773218",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/09923a73e002e4c6a573dc138a393bc67df1e14f74fd2010e356d6219f438eed-init/diff:/var/lib/docker/overlay2/a163b16fa32e47fd7ab2fe98717ea5e008831d97c60d714c2328532bf1d6d774/diff",
	                "MergedDir": "/var/lib/docker/overlay2/09923a73e002e4c6a573dc138a393bc67df1e14f74fd2010e356d6219f438eed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/09923a73e002e4c6a573dc138a393bc67df1e14f74fd2010e356d6219f438eed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/09923a73e002e4c6a573dc138a393bc67df1e14f74fd2010e356d6219f438eed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-773218",
	                "Source": "/var/lib/docker/volumes/addons-773218/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-773218",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-773218",
	                "name.minikube.sigs.k8s.io": "addons-773218",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "483dde9256f713e626c87838a17742d2fbf7baa58803ff77a289f81f30f56552",
	            "SandboxKey": "/var/run/docker/netns/483dde9256f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-773218": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "cfe92218cd324d48194bd1c2ca30a7b1e95e9358ec0c60741886ef461a40721d",
	                    "EndpointID": "be3001c34c5ef77b7b880c447acf2ed5437e39d7a18d3acbf707c7f1d25d6c57",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-773218",
	                        "c6c0d1582b06"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
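Two HostConfig fields in the inspect output above pin down the capacity involved in the Volcano failure: "Memory": 4194304000 and "NanoCpus": 2000000000. A quick arithmetic sketch (hypothetical standalone program using only those two constants) showing they correspond to --memory=4000 (MiB) in the audit table below and the 2-CPU container noted in the start log:

// inspectmath.go: hypothetical sketch relating docker inspect HostConfig
// values to the minikube flags recorded later in this report.
package main

import "fmt"

func main() {
	const memoryBytes = 4194304000 // HostConfig.Memory
	const nanoCPUs = 2000000000    // HostConfig.NanoCpus

	fmt.Println(memoryBytes/(1024*1024), "MiB") // 4000 -> --memory=4000
	fmt.Println(nanoCPUs/1000000000, "CPUs")    // 2 -> "CPUs=2" in the start log
}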
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-773218 -n addons-773218
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-773218 logs -n 25: (1.582846333s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-549752   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | -p download-only-549752              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| delete  | -p download-only-549752              | download-only-549752   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| start   | -o=json --download-only              | download-only-473657   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | -p download-only-473657              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| delete  | -p download-only-473657              | download-only-473657   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| delete  | -p download-only-549752              | download-only-549752   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| delete  | -p download-only-473657              | download-only-473657   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| start   | --download-only -p                   | download-docker-863236 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | download-docker-863236               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-863236            | download-docker-863236 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| start   | --download-only -p                   | binary-mirror-748422   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | binary-mirror-748422                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41401               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-748422              | binary-mirror-748422   | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| addons  | disable dashboard -p                 | addons-773218          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | addons-773218                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-773218          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | addons-773218                        |                        |         |         |                     |                     |
	| start   | -p addons-773218 --wait=true         | addons-773218          | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:09 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:05:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:05:24.861939  298896 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:24.862126  298896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:24.862154  298896 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:24.862173  298896 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:24.862425  298896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:05:24.862903  298896 out.go:352] Setting JSON to false
	I0815 17:05:24.863772  298896 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6468,"bootTime":1723735057,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 17:05:24.863872  298896 start.go:139] virtualization:  
	I0815 17:05:24.866748  298896 out.go:177] * [addons-773218] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 17:05:24.868910  298896 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:05:24.869005  298896 notify.go:220] Checking for updates...
	I0815 17:05:24.873005  298896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:24.874737  298896 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:05:24.876522  298896 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	I0815 17:05:24.878153  298896 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 17:05:24.880195  298896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:05:24.882223  298896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:24.905706  298896 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:05:24.905821  298896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:24.975246  298896 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 17:05:24.965981965 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:05:24.975359  298896 docker.go:307] overlay module found
	I0815 17:05:24.978745  298896 out.go:177] * Using the docker driver based on user configuration
	I0815 17:05:24.980658  298896 start.go:297] selected driver: docker
	I0815 17:05:24.980674  298896 start.go:901] validating driver "docker" against <nil>
	I0815 17:05:24.980688  298896 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:05:24.981449  298896 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:25.036818  298896 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 17:05:25.02614302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:05:25.036994  298896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:25.037323  298896 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:05:25.039383  298896 out.go:177] * Using Docker driver with root privileges
	I0815 17:05:25.041206  298896 cni.go:84] Creating CNI manager for ""
	I0815 17:05:25.041242  298896 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:05:25.041270  298896 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:25.041369  298896 start.go:340] cluster config:
	{Name:addons-773218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-773218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:25.043208  298896 out.go:177] * Starting "addons-773218" primary control-plane node in "addons-773218" cluster
	I0815 17:05:25.044714  298896 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 17:05:25.046288  298896 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:05:25.048099  298896 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:05:25.048262  298896 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 17:05:25.048305  298896 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0815 17:05:25.048318  298896 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:25.048387  298896 preload.go:172] Found /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:05:25.048402  298896 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0815 17:05:25.048734  298896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/config.json ...
	I0815 17:05:25.048762  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/config.json: {Name:mkc06e8060d65d6f47be2181d736104aa101d285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:25.063112  298896 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:05:25.063239  298896 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:05:25.063258  298896 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:05:25.063263  298896 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:05:25.063270  298896 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:05:25.063276  298896 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:05:41.801620  298896 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:05:41.801663  298896 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:05:41.801705  298896 start.go:360] acquireMachinesLock for addons-773218: {Name:mk61663d58e0bc4f2a7b0bd4a6f50b5c7d0a99b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:05:41.801823  298896 start.go:364] duration metric: took 96.271µs to acquireMachinesLock for "addons-773218"
	I0815 17:05:41.801854  298896 start.go:93] Provisioning new machine with config: &{Name:addons-773218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-773218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0815 17:05:41.801945  298896 start.go:125] createHost starting for "" (driver="docker")
	I0815 17:05:41.804218  298896 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0815 17:05:41.804487  298896 start.go:159] libmachine.API.Create for "addons-773218" (driver="docker")
	I0815 17:05:41.804523  298896 client.go:168] LocalClient.Create starting
	I0815 17:05:41.804634  298896 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem
	I0815 17:05:43.311384  298896 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem
	I0815 17:05:43.522024  298896 cli_runner.go:164] Run: docker network inspect addons-773218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 17:05:43.536895  298896 cli_runner.go:211] docker network inspect addons-773218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 17:05:43.536981  298896 network_create.go:284] running [docker network inspect addons-773218] to gather additional debugging logs...
	I0815 17:05:43.537003  298896 cli_runner.go:164] Run: docker network inspect addons-773218
	W0815 17:05:43.551770  298896 cli_runner.go:211] docker network inspect addons-773218 returned with exit code 1
	I0815 17:05:43.551808  298896 network_create.go:287] error running [docker network inspect addons-773218]: docker network inspect addons-773218: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-773218 not found
	I0815 17:05:43.551823  298896 network_create.go:289] output of [docker network inspect addons-773218]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-773218 not found
	
	** /stderr **
	I0815 17:05:43.551944  298896 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:05:43.568458  298896 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001814870}
	I0815 17:05:43.568515  298896 network_create.go:124] attempt to create docker network addons-773218 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0815 17:05:43.568578  298896 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-773218 addons-773218
	I0815 17:05:43.641453  298896 network_create.go:108] docker network addons-773218 192.168.49.0/24 created
	I0815 17:05:43.641486  298896 kic.go:121] calculated static IP "192.168.49.2" for the "addons-773218" container
	I0815 17:05:43.641557  298896 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 17:05:43.656099  298896 cli_runner.go:164] Run: docker volume create addons-773218 --label name.minikube.sigs.k8s.io=addons-773218 --label created_by.minikube.sigs.k8s.io=true
	I0815 17:05:43.672737  298896 oci.go:103] Successfully created a docker volume addons-773218
	I0815 17:05:43.672825  298896 cli_runner.go:164] Run: docker run --rm --name addons-773218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-773218 --entrypoint /usr/bin/test -v addons-773218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 17:05:45.828648  298896 cli_runner.go:217] Completed: docker run --rm --name addons-773218-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-773218 --entrypoint /usr/bin/test -v addons-773218:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib: (2.155781068s)
	I0815 17:05:45.828679  298896 oci.go:107] Successfully prepared a docker volume addons-773218
	I0815 17:05:45.828702  298896 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 17:05:45.828722  298896 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 17:05:45.828808  298896 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-773218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
	I0815 17:05:49.949413  298896 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-773218:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (4.120567913s)
	I0815 17:05:49.949445  298896 kic.go:203] duration metric: took 4.120720036s to extract preloaded images to volume ...
	W0815 17:05:49.949569  298896 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 17:05:49.949688  298896 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 17:05:50.000269  298896 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-773218 --name addons-773218 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-773218 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-773218 --network addons-773218 --ip 192.168.49.2 --volume addons-773218:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 17:05:50.338979  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Running}}
	I0815 17:05:50.359316  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:05:50.385463  298896 cli_runner.go:164] Run: docker exec addons-773218 stat /var/lib/dpkg/alternatives/iptables
	I0815 17:05:50.459843  298896 oci.go:144] the created container "addons-773218" has a running status.
	I0815 17:05:50.459871  298896 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa...
	I0815 17:05:50.705080  298896 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 17:05:50.729123  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:05:50.755577  298896 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 17:05:50.755599  298896 kic_runner.go:114] Args: [docker exec --privileged addons-773218 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 17:05:50.874788  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:05:50.898836  298896 machine.go:93] provisionDockerMachine start ...
	I0815 17:05:50.898928  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:05:50.923796  298896 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:50.924055  298896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:50.924063  298896 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:05:51.104933  298896 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-773218
	
	I0815 17:05:51.104955  298896 ubuntu.go:169] provisioning hostname "addons-773218"
	I0815 17:05:51.105022  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:05:51.128679  298896 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:51.128941  298896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:51.128959  298896 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-773218 && echo "addons-773218" | sudo tee /etc/hostname
	I0815 17:05:51.285278  298896 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-773218
	
	I0815 17:05:51.285416  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:05:51.306098  298896 main.go:141] libmachine: Using SSH client type: native
	I0815 17:05:51.306400  298896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0815 17:05:51.306419  298896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-773218' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-773218/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-773218' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:05:51.444554  298896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:05:51.444601  298896 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-292730/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-292730/.minikube}
	I0815 17:05:51.444625  298896 ubuntu.go:177] setting up certificates
	I0815 17:05:51.444635  298896 provision.go:84] configureAuth start
	I0815 17:05:51.444709  298896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-773218
	I0815 17:05:51.460930  298896 provision.go:143] copyHostCerts
	I0815 17:05:51.461027  298896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/cert.pem (1123 bytes)
	I0815 17:05:51.461196  298896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/key.pem (1675 bytes)
	I0815 17:05:51.461275  298896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/ca.pem (1082 bytes)
	I0815 17:05:51.461328  298896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem org=jenkins.addons-773218 san=[127.0.0.1 192.168.49.2 addons-773218 localhost minikube]
	I0815 17:05:52.230760  298896 provision.go:177] copyRemoteCerts
	I0815 17:05:52.230829  298896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:05:52.230871  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:05:52.246800  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:05:52.341759  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:05:52.365268  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0815 17:05:52.388616  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:05:52.411652  298896 provision.go:87] duration metric: took 967.000494ms to configureAuth
	I0815 17:05:52.411716  298896 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:05:52.411923  298896 config.go:182] Loaded profile config "addons-773218": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:05:52.411937  298896 machine.go:96] duration metric: took 1.513083699s to provisionDockerMachine
	I0815 17:05:52.411945  298896 client.go:171] duration metric: took 10.607412364s to LocalClient.Create
	I0815 17:05:52.411967  298896 start.go:167] duration metric: took 10.607481s to libmachine.API.Create "addons-773218"
	I0815 17:05:52.411978  298896 start.go:293] postStartSetup for "addons-773218" (driver="docker")
	I0815 17:05:52.411987  298896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:05:52.412040  298896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:05:52.412086  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:05:52.429187  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:05:52.526343  298896 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:05:52.529505  298896 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:05:52.529546  298896 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:05:52.529558  298896 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:05:52.529566  298896 info.go:137] Remote host: Ubuntu 22.04.4 LTS
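	The three "Couldn't set key ... no corresponding struct field found" warnings above are harmless: libmachine maps /etc/os-release keys onto a fixed struct, and VERSION_CODENAME, PRIVACY_POLICY_URL and UBUNTU_CODENAME have no field there. A key/value parse sidesteps that; a small Go sketch (illustrative, not libmachine's code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		f, err := os.Open("/etc/os-release")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// Unknown keys are simply kept in the map instead of being warned about.
		info := map[string]string{}
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok {
				continue
			}
			info[k] = strings.Trim(v, `"`)
		}
		fmt.Printf("Remote host: %s\n", info["PRETTY_NAME"]) // e.g. Ubuntu 22.04.4 LTS
	}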
	I0815 17:05:52.529578  298896 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-292730/.minikube/addons for local assets ...
	I0815 17:05:52.529655  298896 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-292730/.minikube/files for local assets ...
	I0815 17:05:52.529685  298896 start.go:296] duration metric: took 117.700687ms for postStartSetup
	I0815 17:05:52.530003  298896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-773218
	I0815 17:05:52.545773  298896 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/config.json ...
	I0815 17:05:52.546081  298896 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:05:52.546133  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:05:52.561632  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:05:52.654221  298896 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:05:52.658765  298896 start.go:128] duration metric: took 10.856801668s to createHost
	I0815 17:05:52.658792  298896 start.go:83] releasing machines lock for "addons-773218", held for 10.856955481s
	I0815 17:05:52.658871  298896 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-773218
	I0815 17:05:52.675484  298896 ssh_runner.go:195] Run: cat /version.json
	I0815 17:05:52.675542  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:05:52.675786  298896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:05:52.675852  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:05:52.695801  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:05:52.705099  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:05:52.912218  298896 ssh_runner.go:195] Run: systemctl --version
	I0815 17:05:52.916686  298896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:05:52.920996  298896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0815 17:05:52.945854  298896 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:05:52.945941  298896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:05:52.975185  298896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
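	The two find/sed commands above first patch any loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0), then park competing bridge/podman configs out of the runtime's sight by renaming them with a .mk_disabled suffix, as the "disabled [...] bridge cni config(s)" line reports. The disabling half, sketched in Go (illustrative):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		matches, err := filepath.Glob("/etc/cni/net.d/*")
		if err != nil {
			panic(err)
		}
		var disabled []string
		for _, p := range matches {
			base := filepath.Base(p)
			if strings.HasSuffix(base, ".mk_disabled") {
				continue // already parked on a previous run
			}
			if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
				if err := os.Rename(p, p+".mk_disabled"); err != nil {
					panic(err)
				}
				disabled = append(disabled, p)
			}
		}
		fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
	}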
	I0815 17:05:52.975220  298896 start.go:495] detecting cgroup driver to use...
	I0815 17:05:52.975256  298896 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:05:52.975314  298896 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 17:05:52.988154  298896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 17:05:53.000320  298896 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:05:53.000394  298896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:05:53.020872  298896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:05:53.035423  298896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:05:53.123391  298896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:05:53.214439  298896 docker.go:233] disabling docker service ...
	I0815 17:05:53.214503  298896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:05:53.235304  298896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:05:53.246953  298896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:05:53.343887  298896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:05:53.443229  298896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:05:53.454930  298896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:05:53.472011  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 17:05:53.482807  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 17:05:53.492934  298896 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 17:05:53.493052  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 17:05:53.503495  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 17:05:53.514238  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 17:05:53.526816  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 17:05:53.537628  298896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:05:53.547156  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 17:05:53.557113  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 17:05:53.567464  298896 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 17:05:53.577915  298896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:05:53.586657  298896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:05:53.594967  298896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:05:53.681063  298896 ssh_runner.go:195] Run: sudo systemctl restart containerd
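	The sed pipeline at 17:05:53.472-17:05:53.567 amounts to a handful of in-place edits to /etc/containerd/config.toml: pin the pause image, force SystemdCgroup = false (the host driver was detected as "cgroupfs"), migrate runtime names to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and enable unprivileged ports; containerd is then restarted to pick the file up. The SystemdCgroup edit as a Go sketch (illustrative):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Same effect as: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}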
	I0815 17:05:53.812834  298896 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0815 17:05:53.812983  298896 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
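	"Will wait 60s for socket path" is a plain stat-poll loop against /run/containerd/containerd.sock; something like this sketch:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func main() {
		const sock = "/run/containerd/containerd.sock"
		deadline := time.Now().Add(60 * time.Second)
		for {
			if _, err := os.Stat(sock); err == nil {
				fmt.Println("socket ready:", sock)
				return
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "timed out waiting for", sock)
				os.Exit(1)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}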
	I0815 17:05:53.817143  298896 start.go:563] Will wait 60s for crictl version
	I0815 17:05:53.817251  298896 ssh_runner.go:195] Run: which crictl
	I0815 17:05:53.820605  298896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:05:53.857643  298896 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0815 17:05:53.857805  298896 ssh_runner.go:195] Run: containerd --version
	I0815 17:05:53.879662  298896 ssh_runner.go:195] Run: containerd --version
	I0815 17:05:53.902831  298896 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0815 17:05:53.904714  298896 cli_runner.go:164] Run: docker network inspect addons-773218 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:05:53.919489  298896 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0815 17:05:53.923048  298896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
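	The bash one-liner above makes the hosts entry idempotent: strip any prior host.minikube.internal line, append the fresh one, and stage the result in /tmp before sudo cp'ing it back, so /etc/hosts is never truncated mid-write. An equivalent Go sketch (illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		var kept []string
		for _, line := range lines {
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.49.1\thost.minikube.internal")
		tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("staged hosts file at", tmp) // a real run then does: sudo cp <tmp> /etc/hosts
	}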
	I0815 17:05:53.933693  298896 kubeadm.go:883] updating cluster {Name:addons-773218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-773218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:05:53.933824  298896 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 17:05:53.933892  298896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:05:53.973724  298896 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 17:05:53.973751  298896 containerd.go:534] Images already preloaded, skipping extraction
	I0815 17:05:53.973812  298896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:05:54.014924  298896 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 17:05:54.014948  298896 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:05:54.014957  298896 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0815 17:05:54.015102  298896 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-773218 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-773218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:05:54.015176  298896 ssh_runner.go:195] Run: sudo crictl info
	I0815 17:05:54.054076  298896 cni.go:84] Creating CNI manager for ""
	I0815 17:05:54.054101  298896 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:05:54.054111  298896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:05:54.054158  298896 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-773218 NodeName:addons-773218 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:05:54.054329  298896 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-773218"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
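	The three-document config above (InitConfiguration + ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new below, promoted to kubeadm.yaml, and handed to kubeadm init at 17:05:56.909. Stripped to its core, the hand-off looks like this sketch (illustrative; assumes kubeadm on PATH, and the real --ignore-preflight-errors list is the long one in the Start line further down):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors=SystemVerification") // abbreviated here
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}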
	I0815 17:05:54.054404  298896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:05:54.064135  298896 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:05:54.064209  298896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 17:05:54.073505  298896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0815 17:05:54.092860  298896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:05:54.112233  298896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0815 17:05:54.130780  298896 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0815 17:05:54.134387  298896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:05:54.145488  298896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:05:54.238158  298896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:05:54.252692  298896 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218 for IP: 192.168.49.2
	I0815 17:05:54.252759  298896 certs.go:194] generating shared ca certs ...
	I0815 17:05:54.252789  298896 certs.go:226] acquiring lock for ca certs: {Name:mkb4a15757b6ba038567496d15807eaae760a8a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:54.252966  298896 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-292730/.minikube/ca.key
	I0815 17:05:54.701097  298896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt ...
	I0815 17:05:54.701138  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt: {Name:mkf59af40b02926c37aea374f3eb23803697758a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:54.701849  298896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-292730/.minikube/ca.key ...
	I0815 17:05:54.701866  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/ca.key: {Name:mkeeea45d83ef33ab832e3fdfed03536bc8ca5d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:54.702356  298896 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.key
	I0815 17:05:55.471426  298896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.crt ...
	I0815 17:05:55.471456  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.crt: {Name:mk129e76032b2112f1608a77643d83c0aeff3224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:55.472123  298896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.key ...
	I0815 17:05:55.472139  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.key: {Name:mk1726222706edc5d1e933bf3e7cf2848c57635f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:55.472226  298896 certs.go:256] generating profile certs ...
	I0815 17:05:55.472286  298896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.key
	I0815 17:05:55.472304  298896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt with IP's: []
	I0815 17:05:55.762745  298896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt ...
	I0815 17:05:55.762777  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: {Name:mkada5da779f02732d0385067c7604eb68746720 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:55.762968  298896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.key ...
	I0815 17:05:55.762980  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.key: {Name:mk2aeda231829b4c8422fabcd85de0baebfb3f89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:55.763061  298896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.key.5253bf09
	I0815 17:05:55.763084  298896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.crt.5253bf09 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0815 17:05:56.078451  298896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.crt.5253bf09 ...
	I0815 17:05:56.078482  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.crt.5253bf09: {Name:mk5c42b854f2d77827d59d9bb4a01361ff9574f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:56.078692  298896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.key.5253bf09 ...
	I0815 17:05:56.078710  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.key.5253bf09: {Name:mk3f5ef726b805e2fd96d02f65f819001b0968aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:56.078800  298896 certs.go:381] copying /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.crt.5253bf09 -> /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.crt
	I0815 17:05:56.078885  298896 certs.go:385] copying /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.key.5253bf09 -> /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.key
	I0815 17:05:56.078937  298896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/proxy-client.key
	I0815 17:05:56.078958  298896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/proxy-client.crt with IP's: []
	I0815 17:05:56.490500  298896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/proxy-client.crt ...
	I0815 17:05:56.490533  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/proxy-client.crt: {Name:mka8bd83127768a83c29cdea44f2b0ac4c22d5e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:56.490711  298896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/proxy-client.key ...
	I0815 17:05:56.490730  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/proxy-client.key: {Name:mkd2914227070c0cc84d3fed29efe742a2942736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:56.490918  298896 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:05:56.490961  298896 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:05:56.490987  298896 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:05:56.491013  298896 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem (1675 bytes)
	I0815 17:05:56.491619  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:05:56.517529  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 17:05:56.547225  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:05:56.575426  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:05:56.601152  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0815 17:05:56.625042  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 17:05:56.649113  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:05:56.672524  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:05:56.695923  298896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:05:56.720327  298896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:05:56.738043  298896 ssh_runner.go:195] Run: openssl version
	I0815 17:05:56.743504  298896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:05:56.753214  298896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:56.756528  298896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:56.756588  298896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:05:56.763260  298896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
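	The openssl/ln sequence at 17:05:56.743-17:05:56.763 builds an OpenSSL-style trust-store entry: certificates are looked up by subject-name hash, so minikubeCA.pem gets a companion symlink named after its hash (b5213941.0 in this run). Sketched in Go (illustrative; assumes openssl on PATH):

	package main

	import (
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // replace a stale link if one exists
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}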
	I0815 17:05:56.772556  298896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:05:56.775773  298896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0815 17:05:56.775841  298896 kubeadm.go:392] StartCluster: {Name:addons-773218 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-773218 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:56.775955  298896 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0815 17:05:56.776015  298896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:05:56.812992  298896 cri.go:89] found id: ""
	I0815 17:05:56.813064  298896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:05:56.822017  298896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:05:56.831702  298896 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 17:05:56.831765  298896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:05:56.840664  298896 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:05:56.840685  298896 kubeadm.go:157] found existing configuration files:
	
	I0815 17:05:56.840753  298896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 17:05:56.849717  298896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:05:56.849779  298896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:05:56.858259  298896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 17:05:56.866929  298896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:05:56.867016  298896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:05:56.875073  298896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 17:05:56.883395  298896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:05:56.883456  298896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:05:56.891946  298896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 17:05:56.900630  298896 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:05:56.900719  298896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 17:05:56.909114  298896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 17:05:56.952141  298896 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 17:05:56.952447  298896 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:05:56.970582  298896 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 17:05:56.970656  298896 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0815 17:05:56.970694  298896 kubeadm.go:310] OS: Linux
	I0815 17:05:56.970743  298896 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 17:05:56.970793  298896 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 17:05:56.970842  298896 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 17:05:56.970891  298896 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 17:05:56.970942  298896 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 17:05:56.970992  298896 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 17:05:56.971039  298896 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 17:05:56.971090  298896 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 17:05:56.971139  298896 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 17:05:57.032001  298896 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:05:57.032194  298896 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:05:57.032354  298896 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:05:57.038233  298896 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:05:57.042150  298896 out.go:235]   - Generating certificates and keys ...
	I0815 17:05:57.042310  298896 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:05:57.042397  298896 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:05:57.292912  298896 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 17:05:57.807669  298896 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 17:05:58.086312  298896 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 17:05:58.426684  298896 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 17:05:58.819355  298896 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 17:05:58.819863  298896 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-773218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 17:05:59.201027  298896 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 17:05:59.201309  298896 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-773218 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0815 17:05:59.967071  298896 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 17:06:01.288731  298896 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 17:06:02.154794  298896 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 17:06:02.155027  298896 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:06:02.566988  298896 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:06:03.727558  298896 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 17:06:03.994863  298896 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:06:04.120055  298896 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:06:04.345971  298896 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:06:04.346862  298896 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:06:04.349965  298896 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:06:04.352779  298896 out.go:235]   - Booting up control plane ...
	I0815 17:06:04.352888  298896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:06:04.352962  298896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:06:04.353536  298896 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:06:04.364333  298896 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:06:04.369822  298896 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:06:04.370170  298896 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:06:04.468538  298896 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 17:06:04.468654  298896 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 17:06:05.470159  298896 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001686171s
	I0815 17:06:05.470251  298896 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0815 17:06:11.972140  298896 kubeadm.go:310] [api-check] The API server is healthy after 6.501982606s
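	Both "[kubelet-check]" and "[api-check]" above are the same idea: poll a /healthz endpoint until it answers 200 or the 4m0s budget runs out. For the kubelet side (plain HTTP on 127.0.0.1:10248), a sketch:

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 2 * time.Second}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("http://127.0.0.1:10248/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("kubelet is healthy")
					return
				}
			}
			time.Sleep(time.Second)
		}
		fmt.Fprintln(os.Stderr, "healthz never came up")
		os.Exit(1)
	}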
	I0815 17:06:11.993399  298896 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0815 17:06:12.020271  298896 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0815 17:06:12.053999  298896 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0815 17:06:12.054215  298896 kubeadm.go:310] [mark-control-plane] Marking the node addons-773218 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0815 17:06:12.070656  298896 kubeadm.go:310] [bootstrap-token] Using token: 65ual8.bvyckroxy0yc90rt
	I0815 17:06:12.072918  298896 out.go:235]   - Configuring RBAC rules ...
	I0815 17:06:12.073045  298896 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0815 17:06:12.085932  298896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0815 17:06:12.098003  298896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0815 17:06:12.102827  298896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0815 17:06:12.107916  298896 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0815 17:06:12.113274  298896 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0815 17:06:12.380889  298896 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0815 17:06:12.806259  298896 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0815 17:06:13.379050  298896 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0815 17:06:13.380571  298896 kubeadm.go:310] 
	I0815 17:06:13.380651  298896 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0815 17:06:13.380661  298896 kubeadm.go:310] 
	I0815 17:06:13.380742  298896 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0815 17:06:13.380752  298896 kubeadm.go:310] 
	I0815 17:06:13.380776  298896 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0815 17:06:13.380840  298896 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0815 17:06:13.380905  298896 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0815 17:06:13.380913  298896 kubeadm.go:310] 
	I0815 17:06:13.380987  298896 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0815 17:06:13.380997  298896 kubeadm.go:310] 
	I0815 17:06:13.381047  298896 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0815 17:06:13.381056  298896 kubeadm.go:310] 
	I0815 17:06:13.381112  298896 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0815 17:06:13.381222  298896 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0815 17:06:13.381303  298896 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0815 17:06:13.381312  298896 kubeadm.go:310] 
	I0815 17:06:13.381408  298896 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0815 17:06:13.381486  298896 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0815 17:06:13.381498  298896 kubeadm.go:310] 
	I0815 17:06:13.381585  298896 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 65ual8.bvyckroxy0yc90rt \
	I0815 17:06:13.381697  298896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9d903fedf847e1632fbd88707579b10a2d0dc96fbe5953a96f80401ed3ad9bfe \
	I0815 17:06:13.381725  298896 kubeadm.go:310] 	--control-plane 
	I0815 17:06:13.381733  298896 kubeadm.go:310] 
	I0815 17:06:13.381818  298896 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0815 17:06:13.381827  298896 kubeadm.go:310] 
	I0815 17:06:13.381906  298896 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 65ual8.bvyckroxy0yc90rt \
	I0815 17:06:13.382008  298896 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9d903fedf847e1632fbd88707579b10a2d0dc96fbe5953a96f80401ed3ad9bfe 
	I0815 17:06:13.385328  298896 kubeadm.go:310] W0815 17:05:56.948582    1024 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:06:13.385635  298896 kubeadm.go:310] W0815 17:05:56.949640    1024 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0815 17:06:13.385866  298896 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0815 17:06:13.385983  298896 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0815 17:06:13.386012  298896 cni.go:84] Creating CNI manager for ""
	I0815 17:06:13.386024  298896 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:06:13.388323  298896 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0815 17:06:13.390845  298896 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0815 17:06:13.394896  298896 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0815 17:06:13.394917  298896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0815 17:06:13.413403  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0815 17:06:13.685053  298896 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0815 17:06:13.685210  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:13.685312  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-773218 minikube.k8s.io/updated_at=2024_08_15T17_06_13_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7 minikube.k8s.io/name=addons-773218 minikube.k8s.io/primary=true
	I0815 17:06:13.867936  298896 ops.go:34] apiserver oom_adj: -16
	I0815 17:06:13.868047  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:14.368715  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:14.868258  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:15.368142  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:15.868177  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:16.368186  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:16.868870  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:17.368190  298896 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0815 17:06:17.460218  298896 kubeadm.go:1113] duration metric: took 3.775052376s to wait for elevateKubeSystemPrivileges
	I0815 17:06:17.460255  298896 kubeadm.go:394] duration metric: took 20.684440061s to StartCluster
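	The burst of "kubectl get sa default" runs at half-second intervals above is minikube waiting for kubeadm's post-init controllers to create the default ServiceAccount; that wait is what the elevateKubeSystemPrivileges duration metric measures. As a sketch (illustrative; assumes kubectl on PATH and the kubeconfig path from the log):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			err := exec.Command("kubectl", "get", "sa", "default",
				"--kubeconfig", "/var/lib/minikube/kubeconfig").Run()
			if err == nil {
				fmt.Println("default service account is ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Fprintln(os.Stderr, "default service account never appeared")
		os.Exit(1)
	}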
	I0815 17:06:17.460275  298896 settings.go:142] acquiring lock: {Name:mk45ce81b4bf65b6cbcfdad87d2da5b14c3b063e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:17.460949  298896 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:06:17.461369  298896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/kubeconfig: {Name:mkdfbda4e28d6fa44e652363c57a1f0d4206cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:06:17.461581  298896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0815 17:06:17.461612  298896 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0815 17:06:17.461860  298896 config.go:182] Loaded profile config "addons-773218": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:06:17.461893  298896 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0815 17:06:17.461975  298896 addons.go:69] Setting yakd=true in profile "addons-773218"
	I0815 17:06:17.461996  298896 addons.go:234] Setting addon yakd=true in "addons-773218"
	I0815 17:06:17.462020  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.462479  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.462962  298896 addons.go:69] Setting metrics-server=true in profile "addons-773218"
	I0815 17:06:17.462976  298896 addons.go:69] Setting registry=true in profile "addons-773218"
	I0815 17:06:17.462994  298896 addons.go:234] Setting addon metrics-server=true in "addons-773218"
	I0815 17:06:17.463005  298896 addons.go:234] Setting addon registry=true in "addons-773218"
	I0815 17:06:17.463026  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.463029  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.463417  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.463489  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.462968  298896 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-773218"
	I0815 17:06:17.465625  298896 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-773218"
	I0815 17:06:17.465664  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.466136  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.465297  298896 addons.go:69] Setting cloud-spanner=true in profile "addons-773218"
	I0815 17:06:17.470520  298896 addons.go:234] Setting addon cloud-spanner=true in "addons-773218"
	I0815 17:06:17.470562  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.471119  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.465310  298896 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-773218"
	I0815 17:06:17.476226  298896 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-773218"
	I0815 17:06:17.476291  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.476889  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.465320  298896 addons.go:69] Setting default-storageclass=true in profile "addons-773218"
	I0815 17:06:17.497573  298896 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-773218"
	I0815 17:06:17.497925  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.465324  298896 addons.go:69] Setting gcp-auth=true in profile "addons-773218"
	I0815 17:06:17.513011  298896 mustload.go:65] Loading cluster: addons-773218
	I0815 17:06:17.513267  298896 config.go:182] Loaded profile config "addons-773218": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:06:17.513604  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.465327  298896 addons.go:69] Setting ingress=true in profile "addons-773218"
	I0815 17:06:17.533395  298896 addons.go:234] Setting addon ingress=true in "addons-773218"
	I0815 17:06:17.533472  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.533969  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.465332  298896 addons.go:69] Setting ingress-dns=true in profile "addons-773218"
	I0815 17:06:17.541930  298896 addons.go:234] Setting addon ingress-dns=true in "addons-773218"
	I0815 17:06:17.541979  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.542579  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.550256  298896 out.go:177]   - Using image docker.io/registry:2.8.3
	I0815 17:06:17.465336  298896 addons.go:69] Setting inspektor-gadget=true in profile "addons-773218"
	I0815 17:06:17.552924  298896 addons.go:234] Setting addon inspektor-gadget=true in "addons-773218"
	I0815 17:06:17.552995  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.553538  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.465344  298896 out.go:177] * Verifying Kubernetes components...
	I0815 17:06:17.465522  298896 addons.go:69] Setting volcano=true in profile "addons-773218"
	I0815 17:06:17.465528  298896 addons.go:69] Setting storage-provisioner=true in profile "addons-773218"
	I0815 17:06:17.465532  298896 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-773218"
	I0815 17:06:17.465569  298896 addons.go:69] Setting volumesnapshots=true in profile "addons-773218"
	I0815 17:06:17.583352  298896 addons.go:234] Setting addon volcano=true in "addons-773218"
	I0815 17:06:17.583405  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.584337  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.596640  298896 addons.go:234] Setting addon storage-provisioner=true in "addons-773218"
	I0815 17:06:17.596697  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.597244  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.606523  298896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:06:17.607836  298896 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0815 17:06:17.607906  298896 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0815 17:06:17.608790  298896 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0815 17:06:17.616963  298896 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0815 17:06:17.617423  298896 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0815 17:06:17.617446  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0815 17:06:17.617512  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.626634  298896 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0815 17:06:17.626663  298896 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0815 17:06:17.626752  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.609320  298896 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-773218"
	I0815 17:06:17.640930  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.651342  298896 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0815 17:06:17.651366  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0815 17:06:17.651439  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.609343  298896 addons.go:234] Setting addon volumesnapshots=true in "addons-773218"
	I0815 17:06:17.661181  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.661233  298896 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 17:06:17.661259  298896 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 17:06:17.661320  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.662870  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.668625  298896 addons.go:234] Setting addon default-storageclass=true in "addons-773218"
	I0815 17:06:17.668666  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.669087  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.716426  298896 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0815 17:06:17.718681  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0815 17:06:17.718914  298896 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:06:17.718928  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0815 17:06:17.718998  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.723336  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0815 17:06:17.725294  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0815 17:06:17.727211  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0815 17:06:17.729294  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0815 17:06:17.731683  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0815 17:06:17.733427  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0815 17:06:17.740414  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.742415  298896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0815 17:06:17.742511  298896 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0815 17:06:17.746760  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0815 17:06:17.746947  298896 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0815 17:06:17.746963  298896 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0815 17:06:17.747039  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.765624  298896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0815 17:06:17.768407  298896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:06:17.772588  298896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:06:17.774493  298896 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:06:17.774529  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0815 17:06:17.774598  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.789356  298896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0815 17:06:17.789382  298896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0815 17:06:17.789450  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.817302  298896 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:06:17.817476  298896 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0815 17:06:17.822159  298896 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0815 17:06:17.822554  298896 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:06:17.822579  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:06:17.822642  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.841640  298896 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:06:17.841661  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0815 17:06:17.841725  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.862771  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:17.863427  298896 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0815 17:06:17.865827  298896 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0815 17:06:17.867481  298896 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-773218"
	I0815 17:06:17.867522  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:17.867958  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:17.871015  298896 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0815 17:06:17.871073  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0815 17:06:17.871176  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.893647  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:17.915216  298896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:06:17.949366  298896 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0815 17:06:17.953482  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:17.953610  298896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0815 17:06:17.953643  298896 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0815 17:06:17.953717  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:17.970547  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:17.977531  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:18.018205  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:18.024472  298896 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:06:18.024495  298896 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:06:18.024558  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:18.057091  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:18.085176  298896 out.go:177]   - Using image docker.io/busybox:stable
	I0815 17:06:18.085378  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:18.089355  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:18.092631  298896 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0815 17:06:18.093731  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:18.094451  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:18.094601  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:18.095134  298896 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:06:18.095151  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0815 17:06:18.095258  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	W0815 17:06:18.109448  298896 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 17:06:18.109480  298896 retry.go:31] will retry after 163.276464ms: ssh: handshake failed: EOF
	W0815 17:06:18.109546  298896 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 17:06:18.109555  298896 retry.go:31] will retry after 288.725911ms: ssh: handshake failed: EOF
	I0815 17:06:18.128113  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	W0815 17:06:18.134346  298896 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0815 17:06:18.134375  298896 retry.go:31] will retry after 292.839748ms: ssh: handshake failed: EOF
	I0815 17:06:18.138487  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
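
The `dial failure (will retry)` / `will retry after ...ms` pairs above show minikube's generic retry helper absorbing transient SSH handshake failures while the node container's forwarded port 33138 comes up. A minimal sketch of that pattern in Go; the `dial` stub, delays, and attempt count are illustrative assumptions, not minikube's actual retry parameters:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryDial keeps calling dial until it succeeds or attempts run out,
    // sleeping a short jittered delay between tries -- the same shape as the
    // "dial failure (will retry)" / "will retry after ...ms" pairs above.
    func retryDial(dial func() error, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            delay := time.Duration(100+rand.Intn(250)) * time.Millisecond
            fmt.Printf("dial failure, will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        // Stub that fails twice before succeeding, mimicking a node whose
        // forwarded SSH port is not accepting handshakes yet.
        failures := 2
        dial := func() error {
            if failures > 0 {
                failures--
                return fmt.Errorf("ssh: handshake failed: EOF")
            }
            return nil
        }
        fmt.Println("result:", retryDial(dial, 5))
    }
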
	I0815 17:06:18.498330  298896 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0815 17:06:18.498357  298896 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0815 17:06:18.682035  298896 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0815 17:06:18.682061  298896 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0815 17:06:18.746900  298896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 17:06:18.746922  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0815 17:06:18.882453  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0815 17:06:18.895505  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:06:18.906080  298896 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0815 17:06:18.906105  298896 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0815 17:06:18.907974  298896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0815 17:06:18.907996  298896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0815 17:06:18.977807  298896 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:06:18.977833  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0815 17:06:18.979532  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:06:18.985585  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0815 17:06:18.993619  298896 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0815 17:06:18.993644  298896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0815 17:06:19.001721  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0815 17:06:19.028224  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0815 17:06:19.039003  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0815 17:06:19.075075  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0815 17:06:19.082219  298896 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0815 17:06:19.082241  298896 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0815 17:06:19.089301  298896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 17:06:19.089327  298896 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 17:06:19.217158  298896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0815 17:06:19.217181  298896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0815 17:06:19.227431  298896 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0815 17:06:19.227499  298896 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0815 17:06:19.236851  298896 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0815 17:06:19.236925  298896 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0815 17:06:19.241024  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0815 17:06:19.317443  298896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0815 17:06:19.317514  298896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0815 17:06:19.376986  298896 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:06:19.377091  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0815 17:06:19.455669  298896 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0815 17:06:19.455748  298896 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0815 17:06:19.457452  298896 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:06:19.457604  298896 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 17:06:19.522955  298896 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0815 17:06:19.523025  298896 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0815 17:06:19.721054  298896 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0815 17:06:19.721153  298896 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0815 17:06:19.723700  298896 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.808455004s)
	I0815 17:06:19.723772  298896 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.981228392s)
	I0815 17:06:19.723881  298896 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
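
The bash pipeline completed just above is how that host record lands in the cluster: it dumps the `coredns` ConfigMap, uses sed to splice a `hosts` stanza in front of the `forward . /etc/resolv.conf` line (plus a `log` directive before `errors`), and feeds the result back through `kubectl replace`. Reconstructed from the sed expression in the command itself, the stanza added to the Corefile is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

`fallthrough` keeps every other name flowing on to the next plugin, so only the single static record is served from the stanza.
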
	I0815 17:06:19.725238  298896 node_ready.go:35] waiting up to 6m0s for node "addons-773218" to be "Ready" ...
	I0815 17:06:19.730017  298896 node_ready.go:49] node "addons-773218" has status "Ready":"True"
	I0815 17:06:19.730091  298896 node_ready.go:38] duration metric: took 4.781792ms for node "addons-773218" to be "Ready" ...
	I0815 17:06:19.730115  298896 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:06:19.740638  298896 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:19.743905  298896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0815 17:06:19.743967  298896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0815 17:06:19.752784  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0815 17:06:19.853064  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:06:19.891681  298896 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0815 17:06:19.891757  298896 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0815 17:06:20.104553  298896 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:06:20.104627  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0815 17:06:20.185935  298896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0815 17:06:20.186008  298896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0815 17:06:20.230785  298896 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-773218" context rescaled to 1 replicas
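
The `kapi.go:214` line reflects minikube trimming CoreDNS from the default two replicas down to one, which is plenty for a single-node cluster. The idiomatic way to do that against the API is through the deployment's scale subresource; a minimal client-go sketch, assuming `clientset` is an already-constructed `*kubernetes.Clientset` (not necessarily how minikube itself implements the rescale):

    package rescale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS sets the coredns deployment to a fixed replica count via
    // the scale subresource, leaving the rest of the deployment spec untouched.
    func rescaleCoreDNS(clientset *kubernetes.Clientset, replicas int32) error {
        ctx := context.Background()
        scale, err := clientset.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = replicas
        _, err = clientset.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
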
	I0815 17:06:20.373990  298896 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0815 17:06:20.374016  298896 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0815 17:06:20.615111  298896 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0815 17:06:20.615140  298896 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0815 17:06:20.645475  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:06:20.817258  298896 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0815 17:06:20.817287  298896 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0815 17:06:21.062071  298896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0815 17:06:21.062149  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0815 17:06:21.205329  298896 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:06:21.205398  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0815 17:06:21.343137  298896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0815 17:06:21.343215  298896 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0815 17:06:21.515051  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0815 17:06:21.614277  298896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0815 17:06:21.614351  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0815 17:06:21.748828  298896 pod_ready.go:103] pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:22.008087  298896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0815 17:06:22.008177  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0815 17:06:22.386325  298896 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:06:22.386394  298896 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0815 17:06:22.535984  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0815 17:06:23.752010  298896 pod_ready.go:103] pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:24.995420  298896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0815 17:06:24.995545  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:25.027144  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:25.620598  298896 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0815 17:06:25.784529  298896 addons.go:234] Setting addon gcp-auth=true in "addons-773218"
	I0815 17:06:25.784577  298896 host.go:66] Checking if "addons-773218" exists ...
	I0815 17:06:25.785051  298896 cli_runner.go:164] Run: docker container inspect addons-773218 --format={{.State.Status}}
	I0815 17:06:25.809590  298896 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0815 17:06:25.809646  298896 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-773218
	I0815 17:06:25.835545  298896 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/addons-773218/id_rsa Username:docker}
	I0815 17:06:26.286264  298896 pod_ready.go:103] pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:27.991899  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.109410661s)
	I0815 17:06:27.992000  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.096472743s)
	I0815 17:06:27.992234  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.012679732s)
	I0815 17:06:27.992303  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.00669259s)
	I0815 17:06:27.992445  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.990698468s)
	I0815 17:06:27.992473  298896 addons.go:475] Verifying addon ingress=true in "addons-773218"
	I0815 17:06:27.992515  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.964263165s)
	I0815 17:06:27.992562  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.953538293s)
	I0815 17:06:27.992664  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.239814191s)
	I0815 17:06:27.992612  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.917516655s)
	I0815 17:06:27.992993  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.139852815s)
	I0815 17:06:27.993011  298896 addons.go:475] Verifying addon metrics-server=true in "addons-773218"
	I0815 17:06:27.993102  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.347587209s)
	W0815 17:06:27.993123  298896 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0815 17:06:27.993159  298896 retry.go:31] will retry after 162.153167ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
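
The failure above is the standard CRD ordering race, not a broken manifest: a single `kubectl apply` both creates the `snapshot.storage.k8s.io` CustomResourceDefinitions and, in the same pass, a `VolumeSnapshotClass` that uses them, and the API server has not finished establishing the new CRDs by the time the custom resource is submitted, hence "no matches for kind ... ensure CRDs are installed first". minikube's answer is simply to retry (the `kubectl apply --force` rerun a few lines below succeeds once the CRDs are registered). A caller who wants to avoid the retry can wait for the CRD's `Established` condition first; a sketch using the apiextensions clientset, where the CRD name and timeout are illustrative:

    package main

    import (
        "context"
        "os"
        "time"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForCRDEstablished polls until the named CRD reports Established=True,
    // i.e. the API server is ready to serve the custom resources it defines.
    func waitForCRDEstablished(client apiextensionsclient.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // not visible yet; keep polling
                }
                for _, cond := range crd.Status.Conditions {
                    if cond.Type == apiextensionsv1.Established && cond.Status == apiextensionsv1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client, err := apiextensionsclient.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // Wait for the snapshot class CRD before applying VolumeSnapshotClass objects.
        if err := waitForCRDEstablished(client, "volumesnapshotclasses.snapshot.storage.k8s.io", 2*time.Minute); err != nil {
            panic(err)
        }
    }
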
	I0815 17:06:27.993231  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.478131295s)
	I0815 17:06:27.992633  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.751549783s)
	I0815 17:06:27.993489  298896 addons.go:475] Verifying addon registry=true in "addons-773218"
	I0815 17:06:28.000873  298896 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-773218 service yakd-dashboard -n yakd-dashboard
	
	I0815 17:06:28.002134  298896 out.go:177] * Verifying ingress addon...
	I0815 17:06:28.002211  298896 out.go:177] * Verifying registry addon...
	I0815 17:06:28.006291  298896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0815 17:06:28.007472  298896 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	W0815 17:06:28.043857  298896 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
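
That `Operation cannot be fulfilled ... the object has been modified` warning is an optimistic-concurrency conflict (HTTP 409): the addon callback read the `local-path` StorageClass, another writer updated it in between, and the update went out with a stale `resourceVersion`. The standard remedy is to re-read and re-apply the change whenever the write conflicts, which client-go packages as `retry.RetryOnConflict`. A sketch, assuming `clientset` is a ready `*kubernetes.Clientset`:

    package defaultsc

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markDefault re-reads the StorageClass on every attempt, so the update
    // never carries a stale resourceVersion; RetryOnConflict reruns the
    // closure automatically whenever the API server answers 409 Conflict.
    func markDefault(clientset *kubernetes.Clientset, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            ctx := context.Background()
            sc, err := clientset.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = clientset.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }
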
	I0815 17:06:28.047384  298896 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0815 17:06:28.047405  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:28.048339  298896 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0815 17:06:28.048348  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
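
The repeating `kapi.go:96` lines that follow are a readiness poll: list the pods matching a label selector in a namespace, then re-check until every match reports phase `Running`. The equivalent with client-go, as a sketch (the function name and polling interval are illustrative, and `clientset` is assumed ready):

    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForLabeledPods polls until at least one pod matches the selector and
    // all matching pods are Running -- the same check the kapi.go lines perform.
    func waitForLabeledPods(clientset *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := clientset.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for _, pod := range pods.Items {
                    if pod.Status.Phase != corev1.PodRunning {
                        return false, nil
                    }
                }
                return true, nil
            })
    }

For the registry wait above, this would be invoked as `waitForLabeledPods(clientset, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)`.
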
	I0815 17:06:28.155674  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0815 17:06:28.511530  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:28.513418  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:28.771388  298896 pod_ready.go:103] pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:28.953661  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.417574366s)
	I0815 17:06:28.953702  298896 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-773218"
	I0815 17:06:28.953924  298896 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.14430281s)
	I0815 17:06:28.957000  298896 out.go:177] * Verifying csi-hostpath-driver addon...
	I0815 17:06:28.957176  298896 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0815 17:06:28.960246  298896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0815 17:06:28.962162  298896 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0815 17:06:28.963952  298896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0815 17:06:28.964002  298896 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0815 17:06:28.999023  298896 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0815 17:06:28.999048  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:29.014276  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:29.015044  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:29.055199  298896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0815 17:06:29.055224  298896 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0815 17:06:29.122696  298896 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:06:29.122723  298896 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0815 17:06:29.208448  298896 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0815 17:06:29.465239  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:29.510665  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:29.512616  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:29.893716  298896 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.737947172s)
	I0815 17:06:29.965255  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:30.068979  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:30.070820  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:30.203363  298896 addons.go:475] Verifying addon gcp-auth=true in "addons-773218"
	I0815 17:06:30.206319  298896 out.go:177] * Verifying gcp-auth addon...
	I0815 17:06:30.209524  298896 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0815 17:06:30.212929  298896 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 17:06:30.466744  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:30.567740  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:30.567882  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:30.965430  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:31.014080  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:31.015704  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:31.247475  298896 pod_ready.go:103] pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:31.466257  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:31.514469  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:31.516014  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:31.965523  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:32.066608  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:32.066942  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:32.470044  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:32.574037  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:32.574398  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:32.973895  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:33.016920  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:33.018661  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:33.466437  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:33.511678  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:33.514635  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:33.747706  298896 pod_ready.go:103] pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace has status "Ready":"False"
	I0815 17:06:33.966734  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:34.071315  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:34.072387  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:34.465313  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:34.510444  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:34.514321  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:34.971197  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:35.019074  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:35.020702  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:35.249052  298896 pod_ready.go:93] pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:35.249080  298896 pod_ready.go:82] duration metric: took 15.50836637s for pod "coredns-6f6b679f8f-mxljc" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.249096  298896 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-z2x5k" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.251938  298896 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-z2x5k" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-z2x5k" not found
	I0815 17:06:35.251970  298896 pod_ready.go:82] duration metric: took 2.867232ms for pod "coredns-6f6b679f8f-z2x5k" in "kube-system" namespace to be "Ready" ...
	E0815 17:06:35.251999  298896 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-z2x5k" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-z2x5k" not found
	I0815 17:06:35.252009  298896 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-773218" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.260418  298896 pod_ready.go:93] pod "etcd-addons-773218" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:35.260454  298896 pod_ready.go:82] duration metric: took 8.433746ms for pod "etcd-addons-773218" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.260468  298896 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-773218" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.268726  298896 pod_ready.go:93] pod "kube-apiserver-addons-773218" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:35.268763  298896 pod_ready.go:82] duration metric: took 8.286947ms for pod "kube-apiserver-addons-773218" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.268781  298896 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-773218" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.277220  298896 pod_ready.go:93] pod "kube-controller-manager-addons-773218" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:35.277241  298896 pod_ready.go:82] duration metric: took 8.452806ms for pod "kube-controller-manager-addons-773218" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.277252  298896 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k8hj5" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.445241  298896 pod_ready.go:93] pod "kube-proxy-k8hj5" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:35.445321  298896 pod_ready.go:82] duration metric: took 168.060071ms for pod "kube-proxy-k8hj5" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.445347  298896 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-773218" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.468677  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:35.515040  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:35.516990  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:35.844288  298896 pod_ready.go:93] pod "kube-scheduler-addons-773218" in "kube-system" namespace has status "Ready":"True"
	I0815 17:06:35.844310  298896 pod_ready.go:82] duration metric: took 398.916072ms for pod "kube-scheduler-addons-773218" in "kube-system" namespace to be "Ready" ...
	I0815 17:06:35.844320  298896 pod_ready.go:39] duration metric: took 16.114164576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:06:35.844335  298896 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:06:35.844404  298896 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:06:35.877196  298896 api_server.go:72] duration metric: took 18.41555164s to wait for apiserver process to appear ...
	I0815 17:06:35.877222  298896 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:06:35.877242  298896 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0815 17:06:35.886079  298896 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0815 17:06:35.887291  298896 api_server.go:141] control plane version: v1.31.0
	I0815 17:06:35.887318  298896 api_server.go:131] duration metric: took 10.089163ms to wait for apiserver health ...
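
The healthz wait above is nothing more than an authenticated GET against the API server's `/healthz` path, satisfied once the body comes back as `ok` with HTTP 200. Through client-go the same check can ride on the clientset's REST client instead of a hand-rolled TLS configuration; a sketch, again assuming a ready `*kubernetes.Clientset`:

    package healthz

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // apiserverHealthy hits /healthz through the authenticated REST client and
    // treats any transport error or non-"ok" body as unhealthy.
    func apiserverHealthy(clientset *kubernetes.Clientset) error {
        body, err := clientset.Discovery().RESTClient().
            Get().AbsPath("/healthz").Do(context.Background()).Raw()
        if err != nil {
            return err
        }
        if string(body) != "ok" {
            return fmt.Errorf("healthz returned %q", string(body))
        }
        return nil
    }
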
	I0815 17:06:35.887327  298896 system_pods.go:43] waiting for kube-system pods to appear ...
	I0815 17:06:35.970966  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:36.013912  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:36.017142  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:36.074008  298896 system_pods.go:59] 18 kube-system pods found
	I0815 17:06:36.074096  298896 system_pods.go:61] "coredns-6f6b679f8f-mxljc" [360ebd89-2720-4993-b576-6320ce724817] Running
	I0815 17:06:36.074133  298896 system_pods.go:61] "csi-hostpath-attacher-0" [f07712f1-d9a4-4dd6-91bc-02c38dbb4c72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 17:06:36.074168  298896 system_pods.go:61] "csi-hostpath-resizer-0" [271cd072-8281-46c7-b861-cdaeaa08e261] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 17:06:36.074199  298896 system_pods.go:61] "csi-hostpathplugin-scjl2" [0abbcb20-de6a-41ee-b998-a1bbfebba749] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 17:06:36.074220  298896 system_pods.go:61] "etcd-addons-773218" [1185bd8e-5158-4566-b8df-61a65c7178d7] Running
	I0815 17:06:36.074240  298896 system_pods.go:61] "kindnet-mcdhn" [67b3f7d2-6e2f-4282-9a2f-6fa91b4a0772] Running
	I0815 17:06:36.074258  298896 system_pods.go:61] "kube-apiserver-addons-773218" [309da903-ea95-42ec-92bb-64c224455a4b] Running
	I0815 17:06:36.074289  298896 system_pods.go:61] "kube-controller-manager-addons-773218" [a2a40d50-5802-4d58-8733-3c82f03c0b98] Running
	I0815 17:06:36.074309  298896 system_pods.go:61] "kube-ingress-dns-minikube" [8de7eb7d-0799-4ae6-8b43-27fd1dc86de8] Running
	I0815 17:06:36.074327  298896 system_pods.go:61] "kube-proxy-k8hj5" [df38a928-679d-4993-bba8-1c468810e624] Running
	I0815 17:06:36.074347  298896 system_pods.go:61] "kube-scheduler-addons-773218" [a9084f8c-57aa-4305-885c-f4f6bfb90ef7] Running
	I0815 17:06:36.074378  298896 system_pods.go:61] "metrics-server-8988944d9-pbx6n" [1ee3a754-0bfc-4950-bfb9-8f7863cb518e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 17:06:36.074409  298896 system_pods.go:61] "nvidia-device-plugin-daemonset-jm8xf" [3767faf1-4959-47be-99ef-741d4904feca] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0815 17:06:36.074430  298896 system_pods.go:61] "registry-6fb4cdfc84-t6znz" [d6170f0f-298c-44dc-bd48-48bc98e610d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 17:06:36.074451  298896 system_pods.go:61] "registry-proxy-2294p" [f8ae5fc1-3cb4-4610-b95a-966036ad420a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 17:06:36.074490  298896 system_pods.go:61] "snapshot-controller-56fcc65765-dtlsr" [199cf90f-af7a-4bdd-9f02-fec55a70ebcf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:06:36.074515  298896 system_pods.go:61] "snapshot-controller-56fcc65765-dv7kx" [bc16dcc9-a541-43ea-b11c-105ed0c49793] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:06:36.074533  298896 system_pods.go:61] "storage-provisioner" [ac98eb2c-1786-4a84-a9cd-51336d941465] Running
	I0815 17:06:36.074555  298896 system_pods.go:74] duration metric: took 187.22064ms to wait for pod list to return data ...
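
The system_pods listing above comes straight from the Kubernetes API: list pods in kube-system and report phase plus the Ready condition. A sketch with client-go, assuming a hypothetical kubeconfig path (minikube builds its client internally rather than from a kubeconfig file).

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		status := string(p.Status.Phase)
		for _, c := range p.Status.Conditions {
			// Mirror the "Pending / Ready:ContainersNotReady" form seen in the log.
			if c.Type == corev1.PodReady && c.Status != corev1.ConditionTrue {
				status += " / Ready:" + c.Reason
			}
		}
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, status)
	}
}
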
	I0815 17:06:36.074588  298896 default_sa.go:34] waiting for default service account to be created ...
	I0815 17:06:36.244743  298896 default_sa.go:45] found service account: "default"
	I0815 17:06:36.244816  298896 default_sa.go:55] duration metric: took 170.204267ms for default service account to be created ...
	I0815 17:06:36.244842  298896 system_pods.go:116] waiting for k8s-apps to be running ...
	I0815 17:06:36.454198  298896 system_pods.go:86] 18 kube-system pods found
	I0815 17:06:36.454236  298896 system_pods.go:89] "coredns-6f6b679f8f-mxljc" [360ebd89-2720-4993-b576-6320ce724817] Running
	I0815 17:06:36.454248  298896 system_pods.go:89] "csi-hostpath-attacher-0" [f07712f1-d9a4-4dd6-91bc-02c38dbb4c72] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0815 17:06:36.454371  298896 system_pods.go:89] "csi-hostpath-resizer-0" [271cd072-8281-46c7-b861-cdaeaa08e261] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0815 17:06:36.454392  298896 system_pods.go:89] "csi-hostpathplugin-scjl2" [0abbcb20-de6a-41ee-b998-a1bbfebba749] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0815 17:06:36.454398  298896 system_pods.go:89] "etcd-addons-773218" [1185bd8e-5158-4566-b8df-61a65c7178d7] Running
	I0815 17:06:36.454405  298896 system_pods.go:89] "kindnet-mcdhn" [67b3f7d2-6e2f-4282-9a2f-6fa91b4a0772] Running
	I0815 17:06:36.454415  298896 system_pods.go:89] "kube-apiserver-addons-773218" [309da903-ea95-42ec-92bb-64c224455a4b] Running
	I0815 17:06:36.454421  298896 system_pods.go:89] "kube-controller-manager-addons-773218" [a2a40d50-5802-4d58-8733-3c82f03c0b98] Running
	I0815 17:06:36.454442  298896 system_pods.go:89] "kube-ingress-dns-minikube" [8de7eb7d-0799-4ae6-8b43-27fd1dc86de8] Running
	I0815 17:06:36.454455  298896 system_pods.go:89] "kube-proxy-k8hj5" [df38a928-679d-4993-bba8-1c468810e624] Running
	I0815 17:06:36.454461  298896 system_pods.go:89] "kube-scheduler-addons-773218" [a9084f8c-57aa-4305-885c-f4f6bfb90ef7] Running
	I0815 17:06:36.454484  298896 system_pods.go:89] "metrics-server-8988944d9-pbx6n" [1ee3a754-0bfc-4950-bfb9-8f7863cb518e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0815 17:06:36.454500  298896 system_pods.go:89] "nvidia-device-plugin-daemonset-jm8xf" [3767faf1-4959-47be-99ef-741d4904feca] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0815 17:06:36.454513  298896 system_pods.go:89] "registry-6fb4cdfc84-t6znz" [d6170f0f-298c-44dc-bd48-48bc98e610d4] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0815 17:06:36.454519  298896 system_pods.go:89] "registry-proxy-2294p" [f8ae5fc1-3cb4-4610-b95a-966036ad420a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0815 17:06:36.454526  298896 system_pods.go:89] "snapshot-controller-56fcc65765-dtlsr" [199cf90f-af7a-4bdd-9f02-fec55a70ebcf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:06:36.454535  298896 system_pods.go:89] "snapshot-controller-56fcc65765-dv7kx" [bc16dcc9-a541-43ea-b11c-105ed0c49793] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0815 17:06:36.454543  298896 system_pods.go:89] "storage-provisioner" [ac98eb2c-1786-4a84-a9cd-51336d941465] Running
	I0815 17:06:36.454562  298896 system_pods.go:126] duration metric: took 209.699509ms to wait for k8s-apps to be running ...
	I0815 17:06:36.454577  298896 system_svc.go:44] waiting for kubelet service to be running ....
	I0815 17:06:36.454643  298896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:06:36.465743  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:36.474701  298896 system_svc.go:56] duration metric: took 20.112503ms WaitForService to wait for kubelet
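
The kubelet check above is a systemd query run over SSH inside the node. Locally the same probe is a single exec: `systemctl is-active --quiet <unit>` exits 0 only while the unit is active, so the error value alone answers the question. A sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit status 0 means active; anything else (inactive, failed, unknown) comes back as an error.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
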
	I0815 17:06:36.474787  298896 kubeadm.go:582] duration metric: took 19.013136814s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:06:36.474825  298896 node_conditions.go:102] verifying NodePressure condition ...
	I0815 17:06:36.512178  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:36.513790  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:36.645703  298896 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0815 17:06:36.645747  298896 node_conditions.go:123] node cpu capacity is 2
	I0815 17:06:36.645760  298896 node_conditions.go:105] duration metric: took 170.908554ms to run NodePressure ...
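
The NodePressure step above reads node capacity (203034800Ki ephemeral storage and 2 CPUs here) from the node status. A client-go sketch of the same read, under the same hypothetical kubeconfig assumption as the pod-listing sketch:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity on the node status.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", n.Name, cpu.String(), eph.String())
	}
}
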
	I0815 17:06:36.645773  298896 start.go:241] waiting for startup goroutines ...
	I0815 17:06:36.966307  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:37.016341  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:37.021027  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:37.465833  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:37.511887  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:37.513120  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:37.966271  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:38.019353  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:38.021026  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:38.465225  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:38.511491  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:38.512649  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:38.966281  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:39.067946  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:39.069436  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:39.465833  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:39.510431  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:39.513257  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:39.967449  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:40.016532  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:40.017432  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:40.465796  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:40.514578  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:40.515258  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:40.965834  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:41.014295  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:41.015876  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:41.465677  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:41.510680  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:41.512567  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:41.965394  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:42.015603  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:42.016772  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:42.465716  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:42.511942  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:42.512877  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:42.964971  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:43.012047  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:43.013806  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:43.465734  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:43.511388  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:43.511924  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:43.968760  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:44.016494  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:44.021362  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:44.465656  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:44.513514  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:44.513832  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:44.966091  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:45.033523  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:45.037168  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:45.465342  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:45.511524  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:45.513368  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:45.964677  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:46.015860  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:46.016367  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:46.465586  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:46.512267  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:46.514286  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:46.996051  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:47.014663  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:47.016340  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:47.467768  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:47.520615  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:47.522519  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:47.965622  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:48.011055  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:48.015246  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:48.467445  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:48.512581  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:48.513201  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:48.965646  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:49.012717  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:49.014173  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:49.467823  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:49.510021  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:49.512833  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:49.966016  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:50.015778  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:50.018865  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:50.464891  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:50.528956  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:50.529548  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:50.965030  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:51.013224  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:51.014176  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:51.465911  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:51.567308  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:51.568556  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:51.965302  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:52.013353  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0815 17:06:52.014395  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:52.464905  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:52.511260  298896 kapi.go:107] duration metric: took 24.50497075s to wait for kubernetes.io/minikube-addons=registry ...
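
Each block of kapi.go:96 lines above is one poll loop: list pods by label selector, log any that are not yet Running, and repeat until all are ready or a timeout fires; the kapi.go:107 line marks the loop's completion with its total duration. A hand-rolled sketch of that pattern, assuming a 500ms interval and phase-only readiness (minikube also inspects pod conditions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns is Running,
// or the timeout elapses.
func waitForPods(client kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					// Analogue of the repeated kapi.go:96 lines above.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					ready = false
				}
			}
			if ready {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForPods(client, "kube-system", "kubernetes.io/minikube-addons=registry", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Polling by listing is simple and immune to missed events; a watch-based variant would save API calls at the cost of handling reconnects.
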
	I0815 17:06:52.513057  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:52.965420  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:53.012918  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:53.465195  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:53.512256  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:53.972220  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:54.041146  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:54.466328  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:54.568647  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:54.966130  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:55.013773  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:55.469860  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:55.515684  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:55.968911  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:56.015811  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:56.466835  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:56.567101  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:56.967772  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:57.068587  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:57.472763  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:57.573841  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:57.965741  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:58.013366  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:58.465116  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:58.512561  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:58.964779  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:59.012192  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:59.467791  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:06:59.512247  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:06:59.965792  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:00.035653  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:00.465890  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:00.567591  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:00.966660  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:01.013004  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:01.465828  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:01.566093  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:01.965249  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:02.012655  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:02.466322  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:02.512706  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:02.965022  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:03.012778  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:03.465681  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:03.511519  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:03.966887  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:04.012233  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:04.465502  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:04.513517  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:04.964389  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:05.012954  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:05.468307  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:05.512054  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:05.964946  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:06.013497  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:06.465162  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:06.512361  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:06.965502  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:07.011920  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:07.472128  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:07.516949  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:07.966427  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:08.012924  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:08.465922  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:08.512162  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:08.969184  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:09.012670  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:09.466145  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:09.512955  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:09.965622  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:10.014287  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:10.467369  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:10.512255  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:10.965109  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:11.013449  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:11.464861  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:11.512571  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:11.965947  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:12.013401  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:12.468176  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:12.514072  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:12.965895  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:13.012794  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:13.465186  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:13.512004  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:13.965541  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:14.014565  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:14.466193  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:14.512991  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:14.965408  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:15.066762  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:15.485900  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:15.512346  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:15.965953  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:16.066774  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:16.466607  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:16.513249  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:16.964773  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:17.012719  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:17.468284  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:17.512716  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:17.965816  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:18.014784  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:18.468436  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:18.512566  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:18.964837  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:19.012240  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:19.465035  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:19.512547  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:19.966120  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:20.014064  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:20.467761  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0815 17:07:20.515321  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:20.965404  298896 kapi.go:107] duration metric: took 52.005165145s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0815 17:07:21.012233  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:21.512115  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:22.012030  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:22.512605  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:23.012512  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:23.511434  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:24.014669  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:24.512064  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:25.012484  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:25.512348  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:26.014147  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:26.511736  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:27.021254  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:27.512343  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:28.013429  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:28.511992  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:29.012267  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:29.512892  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:30.025791  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:30.513268  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:31.013601  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:31.511539  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:32.013036  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:32.512447  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:33.013611  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:33.512338  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:34.018367  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:34.512694  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:35.013402  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:35.512774  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:36.013606  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:36.512539  298896 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0815 17:07:37.013847  298896 kapi.go:107] duration metric: took 1m9.006364296s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0815 17:07:53.213546  298896 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0815 17:07:53.213566  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:53.712884  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:54.213963  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:54.713620  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:55.213778  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:55.713531  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:56.213511  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:56.712816  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:57.213443  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:57.713547  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:58.212996  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:58.712615  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:59.213535  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:07:59.713543  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:00.215214  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:00.712549  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:01.214117  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:01.713381  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:02.215036  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:02.712741  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:03.213093  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:03.712682  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:04.214085  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:04.720106  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:05.212950  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:05.712935  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:06.213044  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:06.713862  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:07.213229  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:07.713543  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:08.213312  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:08.713015  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:09.213376  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:09.712933  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:10.213204  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:10.713392  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:11.213169  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:11.714188  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:12.213430  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:12.713446  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:13.213313  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:13.713539  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:14.213250  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:14.713271  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:15.213730  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:15.713327  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:16.213079  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:16.712697  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:17.213728  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:17.713359  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:18.213315  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:18.713443  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:19.213955  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:19.712702  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:20.213468  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:20.713049  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:21.213298  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:21.713314  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:22.213832  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:22.712972  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:23.213462  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:23.712535  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:24.213443  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:24.712554  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:25.213566  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:25.713188  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:26.212845  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:26.713837  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:27.213918  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:27.713078  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:28.212648  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:28.712782  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:29.213872  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:29.713140  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:30.213174  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:30.714032  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:31.213649  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:31.713408  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:32.213341  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:32.714524  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:33.220074  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:33.714789  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:34.213794  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:34.713862  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:35.213231  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:35.713331  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:36.213857  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:36.714220  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:08:37.212969  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same kapi.go:96 "waiting for pod" line repeats every ~500ms from 17:08:37 through 17:08:59 (44 entries elided) ...]
	I0815 17:08:59.713511  298896 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0815 17:09:00.237786  298896 kapi.go:107] duration metric: took 2m30.028260876s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0815 17:09:00.245340  298896 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-773218 cluster.
	I0815 17:09:00.247730  298896 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0815 17:09:00.249825  298896 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0815 17:09:00.253486  298896 out.go:177] * Enabled addons: volcano, storage-provisioner, cloud-spanner, nvidia-device-plugin, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0815 17:09:00.258382  298896 addons.go:510] duration metric: took 2m42.796472244s for enable addons: enabled=[volcano storage-provisioner cloud-spanner nvidia-device-plugin ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0815 17:09:00.258480  298896 start.go:246] waiting for cluster config update ...
	I0815 17:09:00.258526  298896 start.go:255] writing updated cluster config ...
	I0815 17:09:00.258898  298896 ssh_runner.go:195] Run: rm -f paused
	I0815 17:09:00.658267  298896 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0815 17:09:00.660584  298896 out.go:177] * Done! kubectl is now configured to use "addons-773218" cluster and "default" namespace by default
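
Note: the kapi.go:96 lines above are minikube's 500ms label-selector poll against the apiserver. A minimal sketch of the same pattern with client-go, assuming a pre-built clientset; the function name and parameters are illustrative, not minikube's actual implementation.

package waitsketch

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPod polls every 500ms (the cadence visible in the log)
// until a pod matching the selector reports phase Running, or the
// timeout expires. Illustrative sketch only, not minikube's kapi.go code.
func waitForLabeledPod(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient list error: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			fmt.Printf("waiting for pod %q, current state: Pending\n", selector)
			return false, nil
		})
}

The gcp-auth-skip-secret opt-out mentioned in the out.go lines above is applied per pod, as a label key in the pod's metadata.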
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	e30353c42c7ca       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   9fae9ae92075b       gadget-gczvz
	f1a4300410e00       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   4bcfab2fe3b92       gcp-auth-89d5ffd79-t9tjj
	db8f0d3479383       8b46b1cd48760       4 minutes ago       Running             admission                                0                   3832a23ff70db       volcano-admission-77d7d48b68-5v8qt
	c8363d72310fa       24f8f979639f1       4 minutes ago       Running             controller                               0                   284c1cc698790       ingress-nginx-controller-7559cbf597-9vcfd
	3d4654ceb0ef2       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   28ad34840e126       csi-hostpathplugin-scjl2
	06bdc7b8ff92f       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   28ad34840e126       csi-hostpathplugin-scjl2
	78f6184b64ef7       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   28ad34840e126       csi-hostpathplugin-scjl2
	5f8de6880cc30       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   28ad34840e126       csi-hostpathplugin-scjl2
	4e248a697233b       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   28ad34840e126       csi-hostpathplugin-scjl2
	20074350e0baa       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   28ad34840e126       csi-hostpathplugin-scjl2
	ef55a04f0686d       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   d6f84b44d93b8       csi-hostpath-attacher-0
	381c4fc2ebef4       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   f1ea054e1e52a       csi-hostpath-resizer-0
	64b8bd6a7bccb       296b5f799fcd8       5 minutes ago       Exited              patch                                    1                   8c35b39b052a2       ingress-nginx-admission-patch-vrs6f
	6d1ea654ecaf2       296b5f799fcd8       5 minutes ago       Exited              create                                   0                   2bc3dbe9324bb       ingress-nginx-admission-create-96frn
	2ae538c53b26e       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   d22e93c7b2cca       volcano-controllers-56675bb4d5-vjr4m
	4187a28d413f9       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   78996861653cf       snapshot-controller-56fcc65765-dv7kx
	aedb3a7602db9       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   cddf17b2d7477       local-path-provisioner-86d989889c-pz428
	1e5b8d2edf5c7       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   9ac31aa6c919a       volcano-scheduler-576bc46687-x5rms
	7d225c000ead7       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   a5652fd48fa60       snapshot-controller-56fcc65765-dtlsr
	5116f845fed6c       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   f2b9e96de6fb6       metrics-server-8988944d9-pbx6n
	59fc99e61baff       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   16927a64f5035       registry-proxy-2294p
	9dca092a6c140       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   de50190c07b79       nvidia-device-plugin-daemonset-jm8xf
	e7b4611d7609f       77bdba588b953       5 minutes ago       Running             yakd                                     0                   bc7e586292e38       yakd-dashboard-67d98fc6b-mmx8g
	18a2cc4ccbfae       6fed88f43b276       5 minutes ago       Running             registry                                 0                   aa16fe67c48bb       registry-6fb4cdfc84-t6znz
	c2ba128d972af       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   64668574575cd       cloud-spanner-emulator-c4bc9b5f8-mjslr
	b17f47d0ecf35       2437cf7621777       5 minutes ago       Running             coredns                                  0                   4cc194fc9784e       coredns-6f6b679f8f-mxljc
	c1cf701bc8400       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   054db0303f68f       kube-ingress-dns-minikube
	cc02dfe8a8017       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   61a2e17775fb8       storage-provisioner
	dcc9bb146ef18       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   929513766cb08       kindnet-mcdhn
	6d4f34a4711ce       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   e23ced622149f       kube-proxy-k8hj5
	6263b0801db2f       27e3830e14027       6 minutes ago       Running             etcd                                     0                   51004e43ed85b       etcd-addons-773218
	2f350ecc84863       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   93b194bbc1eaf       kube-apiserver-addons-773218
	a91ba678b421c       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   493ebcaa9f277       kube-scheduler-addons-773218
	a914f5a10e8b8       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   b10bb53ccff9c       kube-controller-manager-addons-773218
	
	
	==> containerd <==
	Aug 15 17:09:33 addons-773218 containerd[813]: time="2024-08-15T17:09:33.858765491Z" level=info msg="CreateContainer within sandbox \"9fae9ae92075b92a2b6e20a5f018037258426aed0f59f333c495719b12b3ca32\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 15 17:09:33 addons-773218 containerd[813]: time="2024-08-15T17:09:33.883488109Z" level=info msg="CreateContainer within sandbox \"9fae9ae92075b92a2b6e20a5f018037258426aed0f59f333c495719b12b3ca32\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\""
	Aug 15 17:09:33 addons-773218 containerd[813]: time="2024-08-15T17:09:33.884182771Z" level=info msg="StartContainer for \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\""
	Aug 15 17:09:33 addons-773218 containerd[813]: time="2024-08-15T17:09:33.942229663Z" level=info msg="StartContainer for \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\" returns successfully"
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.231608559Z" level=info msg="shim disconnected" id=e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99 namespace=k8s.io
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.232083659Z" level=warning msg="cleaning up after shim disconnected" id=e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99 namespace=k8s.io
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.232117104Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.489962450Z" level=error msg="ExecSync for \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.489959513Z" level=error msg="ExecSync for \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.490779532Z" level=error msg="ExecSync for \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.490925444Z" level=error msg="ExecSync for \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.491436229Z" level=error msg="ExecSync for \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.491824536Z" level=error msg="ExecSync for \"e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99\" failed" error="failed to exec in container: container is in CONTAINER_EXITED state"
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.905643074Z" level=info msg="RemoveContainer for \"7d49171fb81abaac2e135b0828b7971a4078842901e634519ee155e334a4277d\""
	Aug 15 17:09:35 addons-773218 containerd[813]: time="2024-08-15T17:09:35.919561005Z" level=info msg="RemoveContainer for \"7d49171fb81abaac2e135b0828b7971a4078842901e634519ee155e334a4277d\" returns successfully"
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.815791948Z" level=info msg="RemoveContainer for \"281084c40cf1adcd0a34490f72c94412d7da21cbdbbdf0d143977fec8004c47b\""
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.822202203Z" level=info msg="RemoveContainer for \"281084c40cf1adcd0a34490f72c94412d7da21cbdbbdf0d143977fec8004c47b\" returns successfully"
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.824228973Z" level=info msg="StopPodSandbox for \"49c4047f085051587a4388299fd6e76f3e19bd88280cd998fafcdd78902ab5b7\""
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.832127194Z" level=info msg="TearDown network for sandbox \"49c4047f085051587a4388299fd6e76f3e19bd88280cd998fafcdd78902ab5b7\" successfully"
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.832173265Z" level=info msg="StopPodSandbox for \"49c4047f085051587a4388299fd6e76f3e19bd88280cd998fafcdd78902ab5b7\" returns successfully"
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.832850360Z" level=info msg="RemovePodSandbox for \"49c4047f085051587a4388299fd6e76f3e19bd88280cd998fafcdd78902ab5b7\""
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.832903521Z" level=info msg="Forcibly stopping sandbox \"49c4047f085051587a4388299fd6e76f3e19bd88280cd998fafcdd78902ab5b7\""
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.840583953Z" level=info msg="TearDown network for sandbox \"49c4047f085051587a4388299fd6e76f3e19bd88280cd998fafcdd78902ab5b7\" successfully"
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.847167607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49c4047f085051587a4388299fd6e76f3e19bd88280cd998fafcdd78902ab5b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 15 17:10:12 addons-773218 containerd[813]: time="2024-08-15T17:10:12.847296640Z" level=info msg="RemovePodSandbox \"49c4047f085051587a4388299fd6e76f3e19bd88280cd998fafcdd78902ab5b7\" returns successfully"
	
	
	==> coredns [b17f47d0ecf35ec6080bfa821a8211eecc01acfcd7192fb3e240e61deac12252] <==
	[INFO] 10.244.0.6:36015 - 9417 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00015776s
	[INFO] 10.244.0.6:52962 - 5880 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002223876s
	[INFO] 10.244.0.6:52962 - 58621 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002580749s
	[INFO] 10.244.0.6:51171 - 60633 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000175352s
	[INFO] 10.244.0.6:51171 - 64219 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000274166s
	[INFO] 10.244.0.6:39462 - 7970 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000105822s
	[INFO] 10.244.0.6:39462 - 60212 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000137001s
	[INFO] 10.244.0.6:44220 - 38399 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061572s
	[INFO] 10.244.0.6:44220 - 56801 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000069186s
	[INFO] 10.244.0.6:34934 - 15743 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062162s
	[INFO] 10.244.0.6:34934 - 11389 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000081436s
	[INFO] 10.244.0.6:52142 - 12498 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001886735s
	[INFO] 10.244.0.6:52142 - 38612 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001771544s
	[INFO] 10.244.0.6:48061 - 54551 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000114872s
	[INFO] 10.244.0.6:48061 - 61209 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00021417s
	[INFO] 10.244.0.24:38116 - 51492 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002009877s
	[INFO] 10.244.0.24:43694 - 59327 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002100863s
	[INFO] 10.244.0.24:54698 - 54886 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000121878s
	[INFO] 10.244.0.24:34228 - 18648 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000065083s
	[INFO] 10.244.0.24:59761 - 35214 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000091372s
	[INFO] 10.244.0.24:34476 - 36222 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000058708s
	[INFO] 10.244.0.24:56656 - 62117 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002353278s
	[INFO] 10.244.0.24:41994 - 40262 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00229407s
	[INFO] 10.244.0.24:47839 - 25751 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00085595s
	[INFO] 10.244.0.24:47784 - 7385 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000925274s
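
Note: the NXDOMAIN/NOERROR pairs above are ordinary ndots-driven search-path expansion, not failures: the resolver appends each search-list suffix before trying the name as-is. A sketch reproducing the query sequence for the registry lookup; the search list is inferred from the suffixes visible in the log, not dumped from the pod's resolv.conf.

package main

import "fmt"

func main() {
	name := "registry.kube-system.svc.cluster.local"
	// Assumed kubelet-generated search list, per the suffixes queried above.
	searchList := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, s := range searchList {
		fmt.Printf("%s.%s -> NXDOMAIN\n", name, s) // tried first, each misses
	}
	fmt.Printf("%s -> NOERROR\n", name) // finally tried as an absolute name
}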
	
	
	==> describe nodes <==
	Name:               addons-773218
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-773218
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=addons-773218
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_06_13_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-773218
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-773218"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:06:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-773218
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:12:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:09:16 +0000   Thu, 15 Aug 2024 17:06:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:09:16 +0000   Thu, 15 Aug 2024 17:06:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:09:16 +0000   Thu, 15 Aug 2024 17:06:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:09:16 +0000   Thu, 15 Aug 2024 17:06:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-773218
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 2cfda270a64a444a8328f5efc9ba42b0
	  System UUID:                0f5b487e-9ff6-4d01-b5e8-796b1001c2fd
	  Boot ID:                    b8353367-6c23-495b-9e1b-e1ab13f1b466
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-mjslr       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-gczvz                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-89d5ffd79-t9tjj                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  ingress-nginx               ingress-nginx-controller-7559cbf597-9vcfd    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m53s
	  kube-system                 coredns-6f6b679f8f-mxljc                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m1s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpathplugin-scjl2                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-773218                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m6s
	  kube-system                 kindnet-mcdhn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-addons-773218                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-addons-773218        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-k8hj5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-773218                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 metrics-server-8988944d9-pbx6n               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-jm8xf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-6fb4cdfc84-t6znz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-2294p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-dtlsr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-56fcc65765-dv7kx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  local-path-storage          local-path-provisioner-86d989889c-pz428      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  volcano-system              volcano-admission-77d7d48b68-5v8qt           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-controllers-56675bb4d5-vjr4m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-scheduler-576bc46687-x5rms           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-mmx8g               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m                     kube-proxy       
	  Normal   NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node addons-773218 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m14s (x7 over 6m14s)  kubelet          Node addons-773218 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node addons-773218 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-773218 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-773218 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-773218 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m2s                   node-controller  Node addons-773218 event: Registered Node addons-773218 in Controller
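
Note: the Allocated resources table above is the scheduling picture behind an "Insufficient cpu" failure on this node: 2 CPUs allocatable, 1050m already requested, 950m of headroom. A worked sketch of the fit arithmetic, assuming a pod that requests a full CPU (the failing job's actual request is not shown in this log).

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	allocatable := resource.MustParse("2")   // node allocatable cpu
	requested := resource.MustParse("1050m") // sum of existing requests (52%)
	jobRequest := resource.MustParse("1")    // assumed request of the pending pod

	headroom := allocatable.DeepCopy()
	headroom.Sub(requested) // 2000m - 1050m = 950m
	fmt.Printf("headroom: %s\n", headroom.String())
	fmt.Printf("fits: %v\n", headroom.Cmp(jobRequest) >= 0) // false -> Insufficient cpu
}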
	
	
	==> dmesg <==
	[Aug15 15:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014315] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.462505] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.048709] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002378] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014356] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003895] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003063] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.666495] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.078915] kauditd_printk_skb: 36 callbacks suppressed
	[Aug15 16:08] hrtimer: interrupt took 36893779 ns
	[Aug15 16:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [6263b0801db2f9256c254bedfcd2cff20ec5b38903e0183911301bf8919fd9a3] <==
	{"level":"info","ts":"2024-08-15T17:06:06.357825Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-15T17:06:06.358052Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-15T17:06:06.358154Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-15T17:06:06.359422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-08-15T17:06:06.361282Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-08-15T17:06:07.201173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-15T17:06:07.201434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-15T17:06:07.201553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-15T17:06:07.201662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-15T17:06:07.201733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-15T17:06:07.201781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-15T17:06:07.201829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-15T17:06:07.213283Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T17:06:07.217343Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-773218 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-15T17:06:07.221282Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-15T17:06:07.221415Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-15T17:06:07.217510Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T17:06:07.217533Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-15T17:06:07.222814Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T17:06:07.224185Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-15T17:06:07.229649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-15T17:06:07.233190Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-15T17:06:07.234885Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T17:06:07.234990Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-15T17:06:07.235125Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [f1a4300410e0008367d42b171040067b0fad7941c17fb519380dd5fdc8e76e81] <==
	2024/08/15 17:08:58 GCP Auth Webhook started!
	2024/08/15 17:09:17 Ready to marshal response ...
	2024/08/15 17:09:17 Ready to write response ...
	2024/08/15 17:09:18 Ready to marshal response ...
	2024/08/15 17:09:18 Ready to write response ...
	
	
	==> kernel <==
	 17:12:19 up  1:54,  0 users,  load average: 0.12, 0.56, 0.70
	Linux addons-773218 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [dcc9bb146ef18b1c8ea44bd32c66665395c05e8519acf8dc33306d8857eb6584] <==
	E0815 17:10:55.810023       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:11:01.134834       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:11:01.134875       1 main.go:299] handling current node
	W0815 17:11:10.920815       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:11:10.920850       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 17:11:11.134464       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:11:11.134502       1 main.go:299] handling current node
	I0815 17:11:21.134266       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:11:21.134308       1 main.go:299] handling current node
	I0815 17:11:31.134094       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:11:31.134127       1 main.go:299] handling current node
	W0815 17:11:34.243081       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:11:34.243116       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 17:11:41.134135       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:11:41.134230       1 main.go:299] handling current node
	W0815 17:11:43.834888       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:11:43.834932       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0815 17:11:44.379952       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:11:44.379987       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:11:51.134651       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:11:51.134694       1 main.go:299] handling current node
	I0815 17:12:01.134816       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:12:01.134907       1 main.go:299] handling current node
	I0815 17:12:11.134629       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0815 17:12:11.134664       1 main.go:299] handling current node
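
Note: the recurring "forbidden" errors above mean the kindnet service account lacks list/watch on namespaces, pods and networkpolicies at cluster scope. A sketch of rules that would grant exactly what the reflectors are asking for, written with the RBAC API types; this is an assumed remedy for illustration, not kindnet's shipped manifest.

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	role := rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "kindnet"},
		Rules: []rbacv1.PolicyRule{
			// core-group resources the reflectors fail to list/watch above
			{APIGroups: []string{""}, Resources: []string{"namespaces", "pods"}, Verbs: []string{"list", "watch"}},
			// networking.k8s.io resource from the same errors
			{APIGroups: []string{"networking.k8s.io"}, Resources: []string{"networkpolicies"}, Verbs: []string{"list", "watch"}},
		},
	}
	fmt.Println(role.Name, len(role.Rules))
}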
	
	
	==> kube-apiserver [2f350ecc84863e23ab1e886188f300701130ba70480d56a7647312d492f4e6e1] <==
	W0815 17:07:32.256491       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:33.188030       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.100.165:443: connect: connection refused
	E0815 17:07:33.188073       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.100.165:443: connect: connection refused" logger="UnhandledError"
	W0815 17:07:33.189770       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:33.253600       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.100.165:443: connect: connection refused
	E0815 17:07:33.253642       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.100.165:443: connect: connection refused" logger="UnhandledError"
	W0815 17:07:33.257534       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:33.308923       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:34.354552       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:35.374092       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:36.436004       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:37.499329       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:38.590982       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:39.655055       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:40.711956       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:41.767138       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:42.783177       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.109.248.53:443: connect: connection refused
	W0815 17:07:53.182332       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.100.165:443: connect: connection refused
	E0815 17:07:53.182373       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.100.165:443: connect: connection refused" logger="UnhandledError"
	W0815 17:08:33.198611       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.100.165:443: connect: connection refused
	E0815 17:08:33.198652       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.100.165:443: connect: connection refused" logger="UnhandledError"
	W0815 17:08:33.263531       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.100.165:443: connect: connection refused
	E0815 17:08:33.263575       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.100.165:443: connect: connection refused" logger="UnhandledError"
	I0815 17:09:17.176512       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0815 17:09:17.212216       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
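
Note: "failing closed" versus "failing open" above is each webhook's failurePolicy: requests hitting volcano's mutatequeue/mutatepod hooks are rejected while the admission service is unreachable, while gcp-auth's hook lets requests through unmutated. A sketch with the admissionregistration types; the concrete configurations are inferred from the log wording, not dumped from the cluster.

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	fail := admissionregistrationv1.Fail     // reject on webhook error ("failing closed")
	ignore := admissionregistrationv1.Ignore // admit unmutated on webhook error ("failing open")

	hooks := []admissionregistrationv1.MutatingWebhook{
		{Name: "mutatequeue.volcano.sh", FailurePolicy: &fail},
		{Name: "gcp-auth-mutate.k8s.io", FailurePolicy: &ignore},
	}
	for _, h := range hooks {
		fmt.Printf("%s: failurePolicy=%s\n", h.Name, *h.FailurePolicy)
	}
}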
	
	
	==> kube-controller-manager [a914f5a10e8b8102ff719531e36b5d4a9315c36d92246c3f87789c73a5938def] <==
	I0815 17:08:33.223970       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 17:08:33.225517       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 17:08:33.238376       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 17:08:33.272772       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:33.282132       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:33.291384       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:33.301923       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:34.736514       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:34.747082       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 17:08:35.844949       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 17:08:35.861593       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:36.851539       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 17:08:36.860297       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 17:08:36.867397       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0815 17:08:36.870684       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:36.879896       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:36.889960       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0815 17:08:59.822153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="10.564427ms"
	I0815 17:08:59.823261       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="49.428µs"
	I0815 17:09:06.033502       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0815 17:09:06.035802       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0815 17:09:06.086793       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0815 17:09:06.088113       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0815 17:09:16.461898       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-773218"
	I0815 17:09:16.893029       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [6d4f34a4711ce961cc3726e6a05d576178cdd093b0ec4924f614c39d6dc8a105] <==
	I0815 17:06:18.820254       1 server_linux.go:66] "Using iptables proxy"
	I0815 17:06:18.948920       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0815 17:06:18.948992       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0815 17:06:18.982866       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0815 17:06:18.983440       1 server_linux.go:169] "Using iptables Proxier"
	I0815 17:06:18.986115       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0815 17:06:18.986810       1 server.go:483] "Version info" version="v1.31.0"
	I0815 17:06:18.986837       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0815 17:06:18.993045       1 config.go:197] "Starting service config controller"
	I0815 17:06:18.993090       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0815 17:06:18.993108       1 config.go:104] "Starting endpoint slice config controller"
	I0815 17:06:18.993113       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0815 17:06:18.997123       1 config.go:326] "Starting node config controller"
	I0815 17:06:18.997326       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0815 17:06:19.093699       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0815 17:06:19.093765       1 shared_informer.go:320] Caches are synced for service config
	I0815 17:06:19.097999       1 shared_informer.go:320] Caches are synced for node config
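
Note: the "Waiting for caches to sync" / "Caches are synced" pairs above are client-go's standard informer startup handshake, which kube-proxy runs for its service, endpoint-slice and node config controllers. A minimal sketch of the pattern, assuming a configured clientset; this is not kube-proxy's actual wiring.

package proxysketch

import (
	"log"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// startAndSync builds a shared informer for Services, starts it, and
// blocks until its cache is synced -- the handshake the
// shared_informer.go lines above record.
func startAndSync(cs kubernetes.Interface, stop <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(cs, 0)
	svcInformer := factory.Core().V1().Services().Informer()
	factory.Start(stop) // "Starting service config controller"
	if !cache.WaitForCacheSync(stop, svcInformer.HasSynced) {
		log.Fatal("timed out waiting for caches to sync")
	}
	// "Caches are synced for service config"
}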
	
	
	==> kube-scheduler [a91ba678b421cf8851d0adeaf2bebc8fa403e30c994737fd5b31c6aaaf6090b1] <==
	W0815 17:06:10.090280       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:06:10.090743       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:10.090914       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:06:10.090938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:10.091007       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 17:06:10.091026       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:10.090920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:06:10.091048       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:11.009697       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:06:11.010264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:11.029279       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 17:06:11.029498       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:11.044909       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:06:11.045212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:11.065224       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:06:11.065271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:11.079200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 17:06:11.079319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:11.079204       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0815 17:06:11.079413       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:11.122877       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:06:11.122928       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0815 17:06:11.418571       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 17:06:11.418615       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0815 17:06:13.781016       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
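
The "forbidden" errors above are the usual kube-scheduler startup race: its reflectors begin listing resources before the apiserver has published the scheduler's RBAC bindings, and the retries stop once authorization catches up (the final "Caches are synced" line shows the scheduler fully up). A hypothetical after-the-fact check of the permission that was failing:

	# Hypothetical: confirm system:kube-scheduler can now list pods cluster-wide.
	kubectl --context addons-773218 auth can-i list pods \
	  --as=system:kube-scheduler --all-namespaces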
	
	
	==> kubelet <==
	Aug 15 17:10:12 addons-773218 kubelet[1485]: I0815 17:10:12.813970    1485 scope.go:117] "RemoveContainer" containerID="281084c40cf1adcd0a34490f72c94412d7da21cbdbbdf0d143977fec8004c47b"
	Aug 15 17:10:15 addons-773218 kubelet[1485]: I0815 17:10:15.716713    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-t6znz" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:10:19 addons-773218 kubelet[1485]: I0815 17:10:19.716349    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:10:19 addons-773218 kubelet[1485]: E0815 17:10:19.716554    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
	Aug 15 17:10:23 addons-773218 kubelet[1485]: I0815 17:10:23.716861    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jm8xf" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:10:33 addons-773218 kubelet[1485]: I0815 17:10:33.716631    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:10:33 addons-773218 kubelet[1485]: E0815 17:10:33.716836    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
	Aug 15 17:10:40 addons-773218 kubelet[1485]: I0815 17:10:40.716545    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2294p" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:10:48 addons-773218 kubelet[1485]: I0815 17:10:48.716742    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:10:48 addons-773218 kubelet[1485]: E0815 17:10:48.716939    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
	Aug 15 17:11:00 addons-773218 kubelet[1485]: I0815 17:11:00.717102    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:11:00 addons-773218 kubelet[1485]: E0815 17:11:00.717345    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
	Aug 15 17:11:14 addons-773218 kubelet[1485]: I0815 17:11:14.718187    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:11:14 addons-773218 kubelet[1485]: E0815 17:11:14.718462    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
	Aug 15 17:11:28 addons-773218 kubelet[1485]: I0815 17:11:28.716996    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-t6znz" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:11:29 addons-773218 kubelet[1485]: I0815 17:11:29.716836    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:11:29 addons-773218 kubelet[1485]: E0815 17:11:29.717033    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
	Aug 15 17:11:35 addons-773218 kubelet[1485]: I0815 17:11:35.717193    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-jm8xf" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:11:42 addons-773218 kubelet[1485]: I0815 17:11:42.718561    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:11:42 addons-773218 kubelet[1485]: E0815 17:11:42.719102    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
	Aug 15 17:11:54 addons-773218 kubelet[1485]: I0815 17:11:54.716760    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:11:54 addons-773218 kubelet[1485]: E0815 17:11:54.716957    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
	Aug 15 17:11:54 addons-773218 kubelet[1485]: I0815 17:11:54.717458    1485 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2294p" secret="" err="secret \"gcp-auth\" not found"
	Aug 15 17:12:08 addons-773218 kubelet[1485]: I0815 17:12:08.716901    1485 scope.go:117] "RemoveContainer" containerID="e30353c42c7ca56413549c8431b187264b632cc55cafb3f518b2305ec5d90e99"
	Aug 15 17:12:08 addons-773218 kubelet[1485]: E0815 17:12:08.717123    1485 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-gczvz_gadget(dce162f9-e608-40f3-a4c8-19dceee07e8f)\"" pod="gadget/gadget-gczvz" podUID="dce162f9-e608-40f3-a4c8-19dceee07e8f"
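
The repeating RemoveContainer / CrashLoopBackOff pair above is kubelet's restart loop for the gadget container: the restart delay roughly doubles per crash (10s, 20s, ..., capped at 5m), and "back-off 2m40s" is the 160s step. A hypothetical follow-up to capture why the container keeps exiting:

	# Hypothetical: fetch the crashed container's logs from its previous run.
	kubectl --context addons-773218 -n gadget logs gadget-gczvz --previous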
	
	
	==> storage-provisioner [cc02dfe8a801711ec614b8b6069ea0e2d0b189853e362889a1a3f9180b9d19b4] <==
	I0815 17:06:24.218568       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:06:24.253456       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:06:24.253511       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:06:24.264989       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:06:24.267377       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-773218_93953fda-e0b8-4caf-9352-06c8b5db3d81!
	I0815 17:06:24.275339       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b2ad42e8-fc04-4290-ad54-22b9b6969380", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-773218_93953fda-e0b8-4caf-9352-06c8b5db3d81 became leader
	I0815 17:06:24.368768       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-773218_93953fda-e0b8-4caf-9352-06c8b5db3d81!
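
The provisioner takes its leader lock via an annotation on the kube-system/k8s.io-minikube-hostpath Endpoints object (the LeaderElection event above). A hypothetical way to inspect the current lock holder:

	# Hypothetical: show the leader-election annotation on the lock object.
	kubectl --context addons-773218 -n kube-system get endpoints \
	  k8s.io-minikube-hostpath -o jsonpath='{.metadata.annotations}'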
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-773218 -n addons-773218
helpers_test.go:261: (dbg) Run:  kubectl --context addons-773218 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-96frn ingress-nginx-admission-patch-vrs6f test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-773218 describe pod ingress-nginx-admission-create-96frn ingress-nginx-admission-patch-vrs6f test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-773218 describe pod ingress-nginx-admission-create-96frn ingress-nginx-admission-patch-vrs6f test-job-nginx-0: exit status 1 (83.175059ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-96frn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-vrs6f" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-773218 describe pod ingress-nginx-admission-create-96frn ingress-nginx-admission-patch-vrs6f test-job-nginx-0: exit status 1
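The NotFound errors are expected at this point: the non-running pods were listed with -A across all namespaces, but the follow-up describe ran without -n and so only searched the default namespace (the pods may also have been cleaned up by then). A hypothetical namespaced re-run for one of them:

	# Hypothetical: describe must target the pod's own namespace.
	kubectl --context addons-773218 describe pod test-job-nginx-0 -n my-volcano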
--- FAIL: TestAddons/serial/Volcano (199.79s)

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (380.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-460705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-460705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m15.624640463s)

-- stdout --
	* [old-k8s-version-460705] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-460705" primary control-plane node in "old-k8s-version-460705" cluster
	* Pulling base image v0.0.44-1723650208-19443 ...
	* Restarting existing docker container for "old-k8s-version-460705" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-460705 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0815 17:52:23.399309  498968 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:52:23.399506  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:52:23.399539  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:52:23.399557  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:52:23.399911  498968 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:52:23.400392  498968 out.go:352] Setting JSON to false
	I0815 17:52:23.403349  498968 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9287,"bootTime":1723735057,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 17:52:23.403427  498968 start.go:139] virtualization:  
	I0815 17:52:23.406331  498968 out.go:177] * [old-k8s-version-460705] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 17:52:23.408965  498968 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:52:23.409195  498968 notify.go:220] Checking for updates...
	I0815 17:52:23.412742  498968 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:52:23.414625  498968 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:52:23.416407  498968 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	I0815 17:52:23.418316  498968 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 17:52:23.420323  498968 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:52:23.422843  498968 config.go:182] Loaded profile config "old-k8s-version-460705": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0815 17:52:23.426219  498968 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0815 17:52:23.428280  498968 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:52:23.463866  498968 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:52:23.463982  498968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:52:23.565849  498968 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-15 17:52:23.551881634 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:52:23.565956  498968 docker.go:307] overlay module found
	I0815 17:52:23.569041  498968 out.go:177] * Using the docker driver based on existing profile
	I0815 17:52:23.570662  498968 start.go:297] selected driver: docker
	I0815 17:52:23.570679  498968 start.go:901] validating driver "docker" against &{Name:old-k8s-version-460705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:52:23.570788  498968 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:52:23.571392  498968 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:52:23.658904  498968 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-15 17:52:23.646849751 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:52:23.659257  498968 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:52:23.659278  498968 cni.go:84] Creating CNI manager for ""
	I0815 17:52:23.659286  498968 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:52:23.659324  498968 start.go:340] cluster config:
	{Name:old-k8s-version-460705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:52:23.663316  498968 out.go:177] * Starting "old-k8s-version-460705" primary control-plane node in "old-k8s-version-460705" cluster
	I0815 17:52:23.665583  498968 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 17:52:23.667503  498968 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:52:23.669903  498968 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 17:52:23.669963  498968 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0815 17:52:23.669987  498968 cache.go:56] Caching tarball of preloaded images
	I0815 17:52:23.670069  498968 preload.go:172] Found /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:52:23.670078  498968 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
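
"Preload" here is a prebuilt tarball of container images matched to the Kubernetes version and runtime, extracted into containerd instead of pulling each image individually. A hypothetical check that the cached artifact the log refers to is the arm64/containerd variant:

	# Hypothetical: list the cached preload tarball on the CI host.
	ls -lh /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4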
	I0815 17:52:23.670200  498968 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/config.json ...
	I0815 17:52:23.670411  498968 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	W0815 17:52:23.693952  498968 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 17:52:23.693971  498968 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:52:23.694039  498968 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:52:23.694056  498968 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:52:23.694060  498968 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:52:23.694068  498968 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:52:23.694073  498968 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:52:23.821941  498968 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:52:23.821979  498968 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:52:23.822019  498968 start.go:360] acquireMachinesLock for old-k8s-version-460705: {Name:mk914e2cbefaf3e1ddf3f04294e38779138b25f1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:52:23.822086  498968 start.go:364] duration metric: took 47.105µs to acquireMachinesLock for "old-k8s-version-460705"
	I0815 17:52:23.822107  498968 start.go:96] Skipping create...Using existing machine configuration
	I0815 17:52:23.822113  498968 fix.go:54] fixHost starting: 
	I0815 17:52:23.822394  498968 cli_runner.go:164] Run: docker container inspect old-k8s-version-460705 --format={{.State.Status}}
	I0815 17:52:23.869214  498968 fix.go:112] recreateIfNeeded on old-k8s-version-460705: state=Stopped err=<nil>
	W0815 17:52:23.869251  498968 fix.go:138] unexpected machine state, will restart: <nil>
	I0815 17:52:23.871657  498968 out.go:177] * Restarting existing docker container for "old-k8s-version-460705" ...
	I0815 17:52:23.873187  498968 cli_runner.go:164] Run: docker start old-k8s-version-460705
	I0815 17:52:24.234074  498968 cli_runner.go:164] Run: docker container inspect old-k8s-version-460705 --format={{.State.Status}}
	I0815 17:52:24.255678  498968 kic.go:430] container "old-k8s-version-460705" state is running.
	I0815 17:52:24.256288  498968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460705
	I0815 17:52:24.279622  498968 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/config.json ...
	I0815 17:52:24.279868  498968 machine.go:93] provisionDockerMachine start ...
	I0815 17:52:24.279938  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:24.299030  498968 main.go:141] libmachine: Using SSH client type: native
	I0815 17:52:24.299293  498968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0815 17:52:24.299302  498968 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:52:24.299917  498968 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37642->127.0.0.1:33433: read: connection reset by peer
	I0815 17:52:27.437035  498968 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-460705
	
	I0815 17:52:27.437056  498968 ubuntu.go:169] provisioning hostname "old-k8s-version-460705"
	I0815 17:52:27.437118  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:27.458307  498968 main.go:141] libmachine: Using SSH client type: native
	I0815 17:52:27.458558  498968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0815 17:52:27.458569  498968 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-460705 && echo "old-k8s-version-460705" | sudo tee /etc/hostname
	I0815 17:52:27.614612  498968 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-460705
	
	I0815 17:52:27.614687  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:27.638674  498968 main.go:141] libmachine: Using SSH client type: native
	I0815 17:52:27.638922  498968 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0815 17:52:27.638944  498968 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-460705' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-460705/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-460705' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:52:27.776935  498968 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:52:27.777002  498968 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-292730/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-292730/.minikube}
	I0815 17:52:27.777047  498968 ubuntu.go:177] setting up certificates
	I0815 17:52:27.777083  498968 provision.go:84] configureAuth start
	I0815 17:52:27.777193  498968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460705
	I0815 17:52:27.803968  498968 provision.go:143] copyHostCerts
	I0815 17:52:27.804032  498968 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-292730/.minikube/ca.pem, removing ...
	I0815 17:52:27.804041  498968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-292730/.minikube/ca.pem
	I0815 17:52:27.804119  498968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/ca.pem (1082 bytes)
	I0815 17:52:27.804223  498968 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-292730/.minikube/cert.pem, removing ...
	I0815 17:52:27.804228  498968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-292730/.minikube/cert.pem
	I0815 17:52:27.804256  498968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/cert.pem (1123 bytes)
	I0815 17:52:27.804316  498968 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-292730/.minikube/key.pem, removing ...
	I0815 17:52:27.804320  498968 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-292730/.minikube/key.pem
	I0815 17:52:27.804343  498968 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/key.pem (1675 bytes)
	I0815 17:52:27.804397  498968 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-460705 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-460705]
	I0815 17:52:28.165818  498968 provision.go:177] copyRemoteCerts
	I0815 17:52:28.165928  498968 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:52:28.165999  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:28.186496  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:28.286154  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:52:28.322205  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0815 17:52:28.349539  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0815 17:52:28.378763  498968 provision.go:87] duration metric: took 601.647505ms to configureAuth
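
configureAuth regenerated a server certificate whose subject alternative names come from the san=[...] list logged above. A hypothetical verification on the CI host with stock openssl:

	# Hypothetical: confirm the server cert carries the expected SANs.
	openssl x509 -in /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'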
	I0815 17:52:28.378793  498968 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:52:28.378998  498968 config.go:182] Loaded profile config "old-k8s-version-460705": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0815 17:52:28.379011  498968 machine.go:96] duration metric: took 4.099135329s to provisionDockerMachine
	I0815 17:52:28.379020  498968 start.go:293] postStartSetup for "old-k8s-version-460705" (driver="docker")
	I0815 17:52:28.379033  498968 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:52:28.379085  498968 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:52:28.379134  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:28.403275  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:28.502399  498968 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:52:28.506132  498968 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:52:28.506164  498968 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:52:28.506175  498968 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:52:28.506182  498968 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 17:52:28.506192  498968 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-292730/.minikube/addons for local assets ...
	I0815 17:52:28.506245  498968 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-292730/.minikube/files for local assets ...
	I0815 17:52:28.506327  498968 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/ssl/certs/2981302.pem -> 2981302.pem in /etc/ssl/certs
	I0815 17:52:28.506426  498968 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:52:28.515882  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/ssl/certs/2981302.pem --> /etc/ssl/certs/2981302.pem (1708 bytes)
	I0815 17:52:28.541243  498968 start.go:296] duration metric: took 162.20441ms for postStartSetup
	I0815 17:52:28.541420  498968 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:52:28.541497  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:28.558692  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:28.650344  498968 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:52:28.655531  498968 fix.go:56] duration metric: took 4.833411095s for fixHost
	I0815 17:52:28.655560  498968 start.go:83] releasing machines lock for "old-k8s-version-460705", held for 4.833463721s
	I0815 17:52:28.655628  498968 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-460705
	I0815 17:52:28.671873  498968 ssh_runner.go:195] Run: cat /version.json
	I0815 17:52:28.671939  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:28.672214  498968 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:52:28.672272  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:28.692311  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:28.705540  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:28.785274  498968 ssh_runner.go:195] Run: systemctl --version
	I0815 17:52:28.924411  498968 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:52:28.929016  498968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0815 17:52:28.947612  498968 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:52:28.947757  498968 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:52:28.958166  498968 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0815 17:52:28.958236  498968 start.go:495] detecting cgroup driver to use...
	I0815 17:52:28.958296  498968 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:52:28.958387  498968 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 17:52:28.973818  498968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 17:52:28.987249  498968 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:52:28.987359  498968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:52:29.001550  498968 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:52:29.015947  498968 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:52:29.132911  498968 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:52:29.241783  498968 docker.go:233] disabling docker service ...
	I0815 17:52:29.241913  498968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:52:29.257227  498968 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:52:29.270345  498968 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:52:29.374667  498968 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:52:29.484514  498968 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:52:29.498778  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:52:29.516953  498968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0815 17:52:29.527618  498968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 17:52:29.537886  498968 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 17:52:29.538000  498968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 17:52:29.548552  498968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 17:52:29.558973  498968 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 17:52:29.569007  498968 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 17:52:29.579350  498968 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:52:29.589151  498968 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 17:52:29.599198  498968 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:52:29.608870  498968 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:52:29.618018  498968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:52:29.725957  498968 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0815 17:52:29.969121  498968 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0815 17:52:29.969256  498968 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0815 17:52:29.977777  498968 start.go:563] Will wait 60s for crictl version
	I0815 17:52:29.977896  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:52:29.989744  498968 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:52:30.068291  498968 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0815 17:52:30.068384  498968 ssh_runner.go:195] Run: containerd --version
	I0815 17:52:30.098694  498968 ssh_runner.go:195] Run: containerd --version
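
The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false), matching the "cgroupfs" driver detected on the host earlier; a kubelet/runtime cgroup-driver mismatch is a classic cause of pods failing to start. A hypothetical spot-check inside the node:

	# Hypothetical: confirm the rewritten containerd config kept cgroupfs.
	out/minikube-linux-arm64 -p old-k8s-version-460705 ssh -- \
	  sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml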
	I0815 17:52:30.131191  498968 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0815 17:52:30.133472  498968 cli_runner.go:164] Run: docker network inspect old-k8s-version-460705 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:52:30.152587  498968 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0815 17:52:30.157262  498968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:52:30.169895  498968 kubeadm.go:883] updating cluster {Name:old-k8s-version-460705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:52:30.170027  498968 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 17:52:30.170090  498968 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:52:30.220345  498968 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 17:52:30.220368  498968 containerd.go:534] Images already preloaded, skipping extraction
	I0815 17:52:30.220435  498968 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:52:30.262909  498968 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 17:52:30.262977  498968 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:52:30.262998  498968 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0815 17:52:30.263184  498968 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-460705 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0815 17:52:30.263307  498968 ssh_runner.go:195] Run: sudo crictl info
	I0815 17:52:30.311923  498968 cni.go:84] Creating CNI manager for ""
	I0815 17:52:30.312000  498968 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:52:30.312028  498968 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:52:30.312088  498968 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-460705 NodeName:old-k8s-version-460705 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0815 17:52:30.312308  498968 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-460705"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 17:52:30.312421  498968 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0815 17:52:30.335889  498968 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:52:30.336005  498968 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 17:52:30.344480  498968 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0815 17:52:30.362759  498968 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:52:30.383898  498968 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
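The 2125-byte kubeadm.yaml.new shipped above is the multi-document config dumped earlier, rendered from the kubeadm options struct at 17:52:30.312088. As a rough illustration only (not minikube's actual code; struct and template names here are hypothetical), a Go sketch of templating such a config:

package main

import (
	"os"
	"text/template"
)

// opts loosely mirrors a few fields from the kubeadm options dump above.
type opts struct {
	AdvertiseAddress  string
	APIServerPort     int
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	o := opts{
		AdvertiseAddress:  "192.168.76.2",
		APIServerPort:     8443,
		KubernetesVersion: "v1.20.0",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
	}
	// template.Must panics on a parse error, which is acceptable for a
	// fixed, compile-time template string.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, o); err != nil {
		panic(err)
	}
}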
	I0815 17:52:30.403237  498968 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0815 17:52:30.407110  498968 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0815 17:52:30.418319  498968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:52:30.521546  498968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:52:30.539918  498968 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705 for IP: 192.168.76.2
	I0815 17:52:30.539992  498968 certs.go:194] generating shared ca certs ...
	I0815 17:52:30.540021  498968 certs.go:226] acquiring lock for ca certs: {Name:mkb4a15757b6ba038567496d15807eaae760a8a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:52:30.540214  498968 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-292730/.minikube/ca.key
	I0815 17:52:30.540300  498968 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.key
	I0815 17:52:30.540327  498968 certs.go:256] generating profile certs ...
	I0815 17:52:30.540488  498968 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.key
	I0815 17:52:30.540614  498968 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/apiserver.key.76bcfd49
	I0815 17:52:30.540697  498968 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/proxy-client.key
	I0815 17:52:30.540868  498968 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/298130.pem (1338 bytes)
	W0815 17:52:30.540930  498968 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-292730/.minikube/certs/298130_empty.pem, impossibly tiny 0 bytes
	I0815 17:52:30.540956  498968 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:52:30.541012  498968 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:52:30.541074  498968 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:52:30.541125  498968 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem (1675 bytes)
	I0815 17:52:30.541227  498968 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/ssl/certs/2981302.pem (1708 bytes)
	I0815 17:52:30.542110  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:52:30.612996  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 17:52:30.675393  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:52:30.737328  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:52:30.798593  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0815 17:52:30.824502  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0815 17:52:30.852110  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:52:30.877671  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0815 17:52:30.904139  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/certs/298130.pem --> /usr/share/ca-certificates/298130.pem (1338 bytes)
	I0815 17:52:30.930152  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/ssl/certs/2981302.pem --> /usr/share/ca-certificates/2981302.pem (1708 bytes)
	I0815 17:52:30.956745  498968 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:52:30.983269  498968 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:52:31.002679  498968 ssh_runner.go:195] Run: openssl version
	I0815 17:52:31.010284  498968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298130.pem && ln -fs /usr/share/ca-certificates/298130.pem /etc/ssl/certs/298130.pem"
	I0815 17:52:31.021154  498968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298130.pem
	I0815 17:52:31.025385  498968 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:15 /usr/share/ca-certificates/298130.pem
	I0815 17:52:31.025514  498968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298130.pem
	I0815 17:52:31.033098  498968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298130.pem /etc/ssl/certs/51391683.0"
	I0815 17:52:31.042910  498968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2981302.pem && ln -fs /usr/share/ca-certificates/2981302.pem /etc/ssl/certs/2981302.pem"
	I0815 17:52:31.053070  498968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2981302.pem
	I0815 17:52:31.057310  498968 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:15 /usr/share/ca-certificates/2981302.pem
	I0815 17:52:31.057445  498968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2981302.pem
	I0815 17:52:31.064891  498968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2981302.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:52:31.074897  498968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:52:31.085602  498968 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:52:31.089618  498968 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:52:31.089761  498968 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:52:31.097122  498968 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
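Each `openssl x509 -hash` / `ln -fs` pair above builds an OpenSSL-style hashed symlink (<subject-hash>.0) in /etc/ssl/certs so TLS clients on the node can locate the CA by subject hash. A sketch of one such step, shelling out to openssl for the hash exactly as the logged commands do (the helper name linkCert is illustrative, not minikube's):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the certificate's subject hash via
// `openssl x509 -hash -noout -in CERT` and symlinks the cert to
// /etc/ssl/certs/<hash>.0, mirroring the `ln -fs` in the log.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}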
	I0815 17:52:31.107896  498968 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:52:31.112433  498968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0815 17:52:31.120082  498968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0815 17:52:31.127937  498968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0815 17:52:31.135441  498968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0815 17:52:31.142869  498968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0815 17:52:31.150229  498968 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
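The six `-checkend 86400` runs assert that each control-plane certificate remains valid for at least the next 24 hours, so a restart will not reuse a cert about to expire. A rough Go equivalent of a single check (a sketch only; minikube shells out to openssl as logged, and the exit codes and messages below follow openssl's -checkend behavior):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	block, _ := pem.Decode(data) // first PEM block is the certificate
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM data found")
		os.Exit(2)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	// -checkend 86400: fail if NotAfter falls within the next 24h.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}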
	I0815 17:52:31.157570  498968 kubeadm.go:392] StartCluster: {Name:old-k8s-version-460705 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-460705 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:52:31.157730  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0815 17:52:31.157826  498968 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:52:31.220114  498968 cri.go:89] found id: "a44575f12412c535e8e8f9561223247881ddc6bdbc1c6623af652d681b631e61"
	I0815 17:52:31.220141  498968 cri.go:89] found id: "bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:52:31.220147  498968 cri.go:89] found id: "3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:52:31.220151  498968 cri.go:89] found id: "83c7e904103c04984bfd04d8148a428ffa15e45f5e5c1ac820b40b32fbe96bcc"
	I0815 17:52:31.220155  498968 cri.go:89] found id: "755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:52:31.220159  498968 cri.go:89] found id: "27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:52:31.220162  498968 cri.go:89] found id: "66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:52:31.220166  498968 cri.go:89] found id: "cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:52:31.220169  498968 cri.go:89] found id: "5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:52:31.220176  498968 cri.go:89] found id: ""
	I0815 17:52:31.220226  498968 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0815 17:52:31.233001  498968 cri.go:116] JSON = null
	W0815 17:52:31.233050  498968 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
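The warning arises because `runc --root /run/containerd/runc/k8s.io list -f json` printed the literal `null` (no containers tracked under that runc root), while `crictl ps -a` had just returned 9 IDs; the paused-container check therefore finds nothing to unpause and minikube logs the mismatch and continues. A sketch of that decode-and-count step (the runc JSON field names are an assumption here, not verified against this runc version):

package main

import (
	"encoding/json"
	"fmt"
)

type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedIDs decodes `runc list -f json` output. json.Unmarshal leaves
// the slice nil for the literal `null`, which is exactly the
// "JSON = null" case in the log above.
func pausedIDs(runcJSON []byte) ([]string, error) {
	var all []runcContainer
	if err := json.Unmarshal(runcJSON, &all); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range all {
		if c.Status == "paused" {
			ids = append(ids, c.ID)
		}
	}
	return ids, nil
}

func main() {
	ids, _ := pausedIDs([]byte("null"))
	fmt.Printf("list returned %d containers\n", len(ids)) // 0, as logged
}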
	I0815 17:52:31.233140  498968 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:52:31.242927  498968 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0815 17:52:31.242948  498968 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0815 17:52:31.242997  498968 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0815 17:52:31.251882  498968 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0815 17:52:31.252316  498968 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-460705" does not appear in /home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:52:31.252421  498968 kubeconfig.go:62] /home/jenkins/minikube-integration/19450-292730/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-460705" cluster setting kubeconfig missing "old-k8s-version-460705" context setting]
	I0815 17:52:31.252712  498968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/kubeconfig: {Name:mkdfbda4e28d6fa44e652363c57a1f0d4206cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:52:31.253946  498968 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0815 17:52:31.263489  498968 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0815 17:52:31.263546  498968 kubeadm.go:597] duration metric: took 20.589914ms to restartPrimaryControlPlane
	I0815 17:52:31.263563  498968 kubeadm.go:394] duration metric: took 106.001803ms to StartCluster
	I0815 17:52:31.263579  498968 settings.go:142] acquiring lock: {Name:mk45ce81b4bf65b6cbcfdad87d2da5b14c3b063e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:52:31.263653  498968 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:52:31.264441  498968 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/kubeconfig: {Name:mkdfbda4e28d6fa44e652363c57a1f0d4206cf57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:52:31.264752  498968 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0815 17:52:31.265192  498968 config.go:182] Loaded profile config "old-k8s-version-460705": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0815 17:52:31.265158  498968 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0815 17:52:31.265240  498968 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-460705"
	I0815 17:52:31.265246  498968 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-460705"
	I0815 17:52:31.265263  498968 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-460705"
	I0815 17:52:31.265265  498968 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-460705"
	W0815 17:52:31.265274  498968 addons.go:243] addon metrics-server should already be in state true
	I0815 17:52:31.265301  498968 host.go:66] Checking if "old-k8s-version-460705" exists ...
	I0815 17:52:31.265712  498968 cli_runner.go:164] Run: docker container inspect old-k8s-version-460705 --format={{.State.Status}}
	I0815 17:52:31.265718  498968 cli_runner.go:164] Run: docker container inspect old-k8s-version-460705 --format={{.State.Status}}
	I0815 17:52:31.266185  498968 addons.go:69] Setting dashboard=true in profile "old-k8s-version-460705"
	I0815 17:52:31.266240  498968 addons.go:234] Setting addon dashboard=true in "old-k8s-version-460705"
	W0815 17:52:31.266263  498968 addons.go:243] addon dashboard should already be in state true
	I0815 17:52:31.266298  498968 host.go:66] Checking if "old-k8s-version-460705" exists ...
	I0815 17:52:31.266789  498968 cli_runner.go:164] Run: docker container inspect old-k8s-version-460705 --format={{.State.Status}}
	I0815 17:52:31.265240  498968 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-460705"
	I0815 17:52:31.267742  498968 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-460705"
	W0815 17:52:31.267769  498968 addons.go:243] addon storage-provisioner should already be in state true
	I0815 17:52:31.267841  498968 host.go:66] Checking if "old-k8s-version-460705" exists ...
	I0815 17:52:31.268353  498968 cli_runner.go:164] Run: docker container inspect old-k8s-version-460705 --format={{.State.Status}}
	I0815 17:52:31.268767  498968 out.go:177] * Verifying Kubernetes components...
	I0815 17:52:31.271636  498968 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:52:31.342975  498968 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0815 17:52:31.344905  498968 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0815 17:52:31.344935  498968 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0815 17:52:31.345004  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:31.345157  498968 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0815 17:52:31.347659  498968 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0815 17:52:31.348831  498968 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-460705"
	W0815 17:52:31.348854  498968 addons.go:243] addon default-storageclass should already be in state true
	I0815 17:52:31.348880  498968 host.go:66] Checking if "old-k8s-version-460705" exists ...
	I0815 17:52:31.349314  498968 cli_runner.go:164] Run: docker container inspect old-k8s-version-460705 --format={{.State.Status}}
	I0815 17:52:31.353081  498968 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:52:31.353104  498968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0815 17:52:31.353198  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:31.354846  498968 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0815 17:52:31.356663  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0815 17:52:31.356687  498968 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0815 17:52:31.356768  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:31.416480  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:31.416798  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:31.424613  498968 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0815 17:52:31.424633  498968 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0815 17:52:31.424698  498968 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-460705
	I0815 17:52:31.428962  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:31.455842  498968 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/old-k8s-version-460705/id_rsa Username:docker}
	I0815 17:52:31.545068  498968 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:52:31.587189  498968 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-460705" to be "Ready" ...
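node_ready polls the node object's Ready condition through the API server until it reports True or the 6m budget is spent; connection-refused errors (as seen below while the apiserver restarts) are tolerated and polling continues. A minimal client-go sketch of such a wait (waitNodeReady is a hypothetical helper, not minikube's node_ready.go):

package nodewait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the named node's Ready condition is True,
// or the timeout elapses. API errors are swallowed so that a briefly
// unreachable apiserver does not abort the wait.
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // apiserver may still be coming up; keep polling
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}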
	I0815 17:52:31.640561  498968 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0815 17:52:31.640691  498968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0815 17:52:31.683403  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:52:31.693898  498968 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0815 17:52:31.693946  498968 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0815 17:52:31.721062  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:52:31.725792  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0815 17:52:31.725858  498968 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0815 17:52:31.749341  498968 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:52:31.749361  498968 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0815 17:52:31.793652  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:52:31.893459  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0815 17:52:31.893535  498968 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0815 17:52:31.934064  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:31.934159  498968 retry.go:31] will retry after 198.023222ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
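From here the addon appliers keep rerunning `kubectl apply` with short, jittered, growing delays until the apiserver at localhost:8443 starts answering; the jitter is why the logged intervals (198ms, 173ms, 213ms, ...) are uneven. A generic sketch of this retry pattern (names and constants are illustrative, not minikube's retry.go):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryApply reruns `kubectl <args>` until it succeeds or attempts run
// out, sleeping a jittered, doubling delay between tries.
func retryApply(args []string, maxAttempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = exec.Command("kubectl", args...).Run(); err == nil {
			return nil
		}
		// Jitter so parallel appliers (storage-provisioner, dashboard,
		// metrics-server) do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2 // exponential backoff between attempts
	}
	return err
}

func main() {
	_ = retryApply([]string{"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}, 5)
}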
	I0815 17:52:31.985921  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0815 17:52:31.985993  498968 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0815 17:52:32.013554  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.013646  498968 retry.go:31] will retry after 173.213787ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.033878  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0815 17:52:32.033948  498968 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0815 17:52:32.050490  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.050588  498968 retry.go:31] will retry after 213.286146ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.064845  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0815 17:52:32.064926  498968 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0815 17:52:32.085145  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0815 17:52:32.085218  498968 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0815 17:52:32.105428  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0815 17:52:32.105498  498968 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0815 17:52:32.124472  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0815 17:52:32.124552  498968 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0815 17:52:32.132626  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:52:32.149902  498968 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 17:52:32.149974  498968 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0815 17:52:32.187052  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:52:32.218241  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 17:52:32.264541  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 17:52:32.282159  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.282241  498968 retry.go:31] will retry after 287.763952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:32.463162  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.463253  498968 retry.go:31] will retry after 231.507929ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:32.505951  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.505984  498968 retry.go:31] will retry after 218.655711ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:32.506070  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.506106  498968 retry.go:31] will retry after 501.594947ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.571113  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0815 17:52:32.643840  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.643873  498968 retry.go:31] will retry after 290.022683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.695049  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:52:32.725478  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 17:52:32.779796  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.779874  498968 retry.go:31] will retry after 421.567612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:32.824799  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.824833  498968 retry.go:31] will retry after 204.659039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:32.934084  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:52:33.008489  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:52:33.030153  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 17:52:33.120399  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.120479  498968 retry.go:31] will retry after 521.549269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.201625  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 17:52:33.216588  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.216623  498968 retry.go:31] will retry after 440.122846ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:33.256785  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.256821  498968 retry.go:31] will retry after 677.26976ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:33.333730  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.333762  498968 retry.go:31] will retry after 854.159612ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.587796  498968 node_ready.go:53] error getting node "old-k8s-version-460705": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-460705": dial tcp 192.168.76.2:8443: connect: connection refused
	I0815 17:52:33.643116  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:52:33.657479  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 17:52:33.784094  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.784127  498968 retry.go:31] will retry after 1.100908156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:33.853060  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.853094  498968 retry.go:31] will retry after 738.842391ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:33.934343  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 17:52:34.091505  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:34.091543  498968 retry.go:31] will retry after 1.098874377s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:34.188850  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 17:52:34.327296  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:34.327329  498968 retry.go:31] will retry after 1.020444477s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:34.592351  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 17:52:34.708979  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:34.709024  498968 retry.go:31] will retry after 1.242308586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:34.885444  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0815 17:52:35.004411  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:35.004448  498968 retry.go:31] will retry after 2.660695335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:35.191387  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 17:52:35.312898  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:35.312931  498968 retry.go:31] will retry after 1.64919851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:35.348191  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 17:52:35.487633  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:35.487673  498968 retry.go:31] will retry after 2.239186198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:35.588189  498968 node_ready.go:53] error getting node "old-k8s-version-460705": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-460705": dial tcp 192.168.76.2:8443: connect: connection refused
	I0815 17:52:35.951762  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0815 17:52:36.048487  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:36.048522  498968 retry.go:31] will retry after 2.369686908s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:36.962482  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 17:52:37.120950  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:37.120986  498968 retry.go:31] will retry after 1.412761821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:37.588680  498968 node_ready.go:53] error getting node "old-k8s-version-460705": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-460705": dial tcp 192.168.76.2:8443: connect: connection refused
	I0815 17:52:37.665974  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0815 17:52:37.727301  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0815 17:52:37.846603  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:37.846637  498968 retry.go:31] will retry after 2.906921751s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:37.914050  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:37.914085  498968 retry.go:31] will retry after 2.758110996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:38.418548  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:52:38.534035  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0815 17:52:38.537243  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:38.537319  498968 retry.go:31] will retry after 1.567163723s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0815 17:52:38.672485  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:38.672515  498968 retry.go:31] will retry after 1.530670052s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:40.088095  498968 node_ready.go:53] error getting node "old-k8s-version-460705": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-460705": dial tcp 192.168.76.2:8443: connect: connection refused
	I0815 17:52:40.105446  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:52:40.203972  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0815 17:52:40.673176  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0815 17:52:40.753729  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0815 17:52:40.873870  498968 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:40.873901  498968 retry.go:31] will retry after 3.235382218s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0815 17:52:44.109611  498968 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0815 17:52:50.550628  498968 node_ready.go:49] node "old-k8s-version-460705" has status "Ready":"True"
	I0815 17:52:50.550664  498968 node_ready.go:38] duration metric: took 18.963402613s for node "old-k8s-version-460705" to be "Ready" ...
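The three node_ready.go:53 errors above all fail the same way: nothing is accepting TCP connections on 192.168.76.2:8443 while the apiserver container restarts, so every poll gets "connect: connection refused" until the node turns Ready at 17:52:50. If you wanted to probe for that condition directly, a plain TCP dial is enough. The following is a minimal Go sketch (not minikube's code); the address is taken from the log, and the loop and timeout values are illustrative assumptions:

package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp reports whether something is accepting TCP connections on
// the apiserver address -- the condition the dial errors above are waiting on.
func apiserverUp(addr string) bool {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// Address as reported by the node_ready.go errors in this log.
	for !apiserverUp("192.168.76.2:8443") {
		fmt.Println("apiserver not up yet; retrying")
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver accepting connections")
}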
	I0815 17:52:50.550676  498968 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:52:50.764554  498968 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-2w5d2" in "kube-system" namespace to be "Ready" ...
	I0815 17:52:50.789737  498968 pod_ready.go:93] pod "coredns-74ff55c5b-2w5d2" in "kube-system" namespace has status "Ready":"True"
	I0815 17:52:50.789771  498968 pod_ready.go:82] duration metric: took 25.182121ms for pod "coredns-74ff55c5b-2w5d2" in "kube-system" namespace to be "Ready" ...
	I0815 17:52:50.789783  498968 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-460705" in "kube-system" namespace to be "Ready" ...
	I0815 17:52:50.855915  498968 pod_ready.go:93] pod "etcd-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"True"
	I0815 17:52:50.855949  498968 pod_ready.go:82] duration metric: took 66.157609ms for pod "etcd-old-k8s-version-460705" in "kube-system" namespace to be "Ready" ...
	I0815 17:52:50.855971  498968 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-460705" in "kube-system" namespace to be "Ready" ...
	I0815 17:52:52.302966  498968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.09890912s)
	I0815 17:52:52.303175  498968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.629969712s)
	I0815 17:52:52.303400  498968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.549638759s)
	I0815 17:52:52.303479  498968 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.193840006s)
	I0815 17:52:52.303495  498968 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-460705"
	I0815 17:52:52.305889  498968 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-460705 addons enable metrics-server
	
	I0815 17:52:52.312482  498968 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0815 17:52:52.315210  498968 addons.go:510] duration metric: took 21.050050452s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
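Note how every failed kubectl apply above is followed by a "retry.go:31] will retry after <randomized delay>" line: the addon applier retries each manifest with a jittered backoff until the apiserver comes back, which is why all four addons still land successfully at 17:52:52 despite the earlier refusals. A minimal Go sketch of that retry shape, assuming a hypothetical op callback standing in for the kubectl invocation (the delays here are illustrative, not minikube's exact backoff policy):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to maxAttempts times, sleeping a jittered, roughly
// doubling delay between failures -- the same shape as the
// "will retry after 1.242308586s" lines in the log above.
func retry(op func() error, maxAttempts int) error {
	delay := time.Second
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = op(); err == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		// Add up to 50% jitter so concurrent appliers do not retry in lockstep.
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return fmt.Errorf("giving up after %d attempts: %w", maxAttempts, err)
}

func main() {
	attempts := 0
	err := retry(func() error {
		attempts++
		if attempts < 3 {
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	}, 5)
	fmt.Println("result:", err)
}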
	I0815 17:52:52.863550  498968 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:52:55.361954  498968 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:52:57.862274  498968 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:52:58.864516  498968 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"True"
	I0815 17:52:58.864545  498968 pod_ready.go:82] duration metric: took 8.008562057s for pod "kube-apiserver-old-k8s-version-460705" in "kube-system" namespace to be "Ready" ...
	I0815 17:52:58.866844  498968 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace to be "Ready" ...
	I0815 17:53:00.874662  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:03.377373  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:05.874973  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:08.372832  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:10.375106  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:12.378644  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:14.882666  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:17.372914  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:19.373569  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:21.373827  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:23.873935  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:26.373634  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:28.873641  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:30.874136  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:32.874568  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:34.874727  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:36.874881  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:39.378205  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:41.873541  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:43.874329  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:46.373352  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:48.873990  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:50.876126  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:53.373049  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:55.873391  498968 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:53:57.372610  498968 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"True"
	I0815 17:53:57.372634  498968 pod_ready.go:82] duration metric: took 58.50574987s for pod "kube-controller-manager-old-k8s-version-460705" in "kube-system" namespace to be "Ready" ...
	I0815 17:53:57.372646  498968 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-q8bzk" in "kube-system" namespace to be "Ready" ...
	I0815 17:53:57.378076  498968 pod_ready.go:93] pod "kube-proxy-q8bzk" in "kube-system" namespace has status "Ready":"True"
	I0815 17:53:57.378145  498968 pod_ready.go:82] duration metric: took 5.489664ms for pod "kube-proxy-q8bzk" in "kube-system" namespace to be "Ready" ...
	I0815 17:53:57.378162  498968 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace to be "Ready" ...
	I0815 17:53:59.384602  498968 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:01.384964  498968 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:03.385357  498968 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:05.885475  498968 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:08.390311  498968 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:10.921671  498968 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:13.384564  498968 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:14.384653  498968 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace has status "Ready":"True"
	I0815 17:54:14.384682  498968 pod_ready.go:82] duration metric: took 17.006510388s for pod "kube-scheduler-old-k8s-version-460705" in "kube-system" namespace to be "Ready" ...
	I0815 17:54:14.384695  498968 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace to be "Ready" ...
	I0815 17:54:16.390776  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:18.892066  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:21.390184  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:23.390668  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:25.390769  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:27.391342  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:29.891188  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:31.892085  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:34.393232  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:36.891648  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:38.891793  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:41.390786  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:43.391535  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:45.892289  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:48.390803  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:50.393377  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:52.891525  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:55.390961  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:54:57.891906  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:00.398254  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:02.891691  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:05.392511  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:07.891343  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:09.891502  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:11.892186  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:14.391717  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:16.890021  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:18.890583  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:20.891222  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:23.397356  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:25.892916  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:28.390988  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:30.391275  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:32.891295  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:34.893091  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:37.400778  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:39.890514  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:41.891540  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:44.390695  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:46.391129  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:48.391396  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:50.894883  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:53.390365  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:55.391747  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:55:57.892165  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:00.419323  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:02.891456  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:05.390760  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:07.891238  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:10.390977  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:12.891514  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:14.891775  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:17.390056  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:19.391451  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:21.393627  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:23.890222  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:25.893566  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:28.391639  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:30.891211  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:33.391771  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:35.892467  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:38.391018  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:40.391193  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:42.391727  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:44.890872  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:46.892456  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:49.391523  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:51.890648  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:53.890783  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:55.892927  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:56:58.391010  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:00.391748  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:02.891401  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:05.390676  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:07.390822  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:09.390964  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:11.391658  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:13.891433  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:15.893169  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:18.390772  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:20.891514  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:23.391418  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:25.891951  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:28.390745  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:30.391398  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:32.391908  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:34.891775  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:37.391455  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:39.891349  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:42.392709  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:44.890944  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:46.892971  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:49.390529  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:51.392613  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:53.891493  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:55.891807  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:57:58.391032  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:00.393290  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:02.891628  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:05.391467  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:07.892762  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:09.895933  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:12.394175  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:14.392131  498968 pod_ready.go:82] duration metric: took 4m0.007420447s for pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace to be "Ready" ...
	E0815 17:58:14.392156  498968 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 17:58:14.392164  498968 pod_ready.go:39] duration metric: took 5m23.841477652s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
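The long run of pod_ready.go:103 lines is a poll loop on the pod's Ready condition. metrics-server points at the deliberately unresolvable fake.domain registry (see the kubelet ErrImagePull errors further down), so its pod can never become Ready and the 4m0s budget ends in "context deadline exceeded". A rough client-go equivalent of such a readiness wait, as a sketch only: it assumes a standard kubeconfig at the default location and is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the pod every two seconds until it is Ready or the
// context deadline expires -- the shape of the pod_ready.go wait above.
func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	fmt.Println(waitPodReady(ctx, cs, "kube-system", "metrics-server-9975d5f86-wd4q2"))
}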
	I0815 17:58:14.392180  498968 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:58:14.392210  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:58:14.392263  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:58:14.466245  498968 cri.go:89] found id: "898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6"
	I0815 17:58:14.466265  498968 cri.go:89] found id: "66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:58:14.466270  498968 cri.go:89] found id: ""
	I0815 17:58:14.466277  498968 logs.go:276] 2 containers: [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438]
	I0815 17:58:14.466332  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.472712  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.476671  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 17:58:14.476743  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:58:14.569437  498968 cri.go:89] found id: "1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6"
	I0815 17:58:14.569457  498968 cri.go:89] found id: "5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:58:14.569461  498968 cri.go:89] found id: ""
	I0815 17:58:14.569468  498968 logs.go:276] 2 containers: [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6 5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197]
	I0815 17:58:14.569528  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.573756  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.577642  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 17:58:14.577709  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:58:14.627733  498968 cri.go:89] found id: "ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568"
	I0815 17:58:14.627752  498968 cri.go:89] found id: "bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:58:14.627756  498968 cri.go:89] found id: ""
	I0815 17:58:14.627764  498968 logs.go:276] 2 containers: [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568 bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047]
	I0815 17:58:14.627816  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.631979  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.635941  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:58:14.636058  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:58:14.688370  498968 cri.go:89] found id: "3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065"
	I0815 17:58:14.688448  498968 cri.go:89] found id: "27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:58:14.688467  498968 cri.go:89] found id: ""
	I0815 17:58:14.688511  498968 logs.go:276] 2 containers: [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065 27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666]
	I0815 17:58:14.688603  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.692767  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.696849  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:58:14.696924  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:58:14.740801  498968 cri.go:89] found id: "ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72"
	I0815 17:58:14.740823  498968 cri.go:89] found id: "755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:58:14.740828  498968 cri.go:89] found id: ""
	I0815 17:58:14.740835  498968 logs.go:276] 2 containers: [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72 755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef]
	I0815 17:58:14.740892  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.745193  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.749095  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:58:14.749226  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:58:14.796992  498968 cri.go:89] found id: "c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791"
	I0815 17:58:14.797067  498968 cri.go:89] found id: "cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:58:14.797087  498968 cri.go:89] found id: ""
	I0815 17:58:14.797110  498968 logs.go:276] 2 containers: [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791 cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f]
	I0815 17:58:14.797229  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.801299  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.805100  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 17:58:14.805249  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:58:14.859351  498968 cri.go:89] found id: "7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97"
	I0815 17:58:14.859417  498968 cri.go:89] found id: "3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:58:14.859436  498968 cri.go:89] found id: ""
	I0815 17:58:14.859458  498968 logs.go:276] 2 containers: [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97 3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6]
	I0815 17:58:14.859541  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.863622  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.867511  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 17:58:14.867647  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 17:58:14.912611  498968 cri.go:89] found id: "e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364"
	I0815 17:58:14.912683  498968 cri.go:89] found id: ""
	I0815 17:58:14.912706  498968 logs.go:276] 1 containers: [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364]
	I0815 17:58:14.912788  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.916954  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 17:58:14.917086  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 17:58:14.964100  498968 cri.go:89] found id: "ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95"
	I0815 17:58:14.964176  498968 cri.go:89] found id: "03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47"
	I0815 17:58:14.964195  498968 cri.go:89] found id: ""
	I0815 17:58:14.964216  498968 logs.go:276] 2 containers: [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95 03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47]
	I0815 17:58:14.964308  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.968707  498968 ssh_runner.go:195] Run: which crictl
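With the API-level checks exhausted, the log collector falls back to the container runtime: for each control-plane component it runs "sudo crictl ps -a --quiet --name=<component>" to discover container IDs (two per component here, one from each start of the cluster), then tails each container's log in the "Gathering logs for ..." steps that follow. A self-contained sketch of both steps, assuming crictl is on PATH and can reach the CRI socket; minikube performs the same commands over its ssh_runner rather than locally:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of all containers (running or exited) whose name
// matches the filter, mirroring the repeated
// `sudo crictl ps -a --quiet --name=<component>` calls above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	return strings.Fields(string(out)), nil
}

// tailLogs fetches the last n lines of one container's log, the same call
// the "Gathering logs for ..." steps below make with --tail 400.
func tailLogs(id string, n int) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", component, len(ids), ids)
		for _, id := range ids {
			logs, _ := tailLogs(id, 400)
			fmt.Println(logs)
		}
	}
}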
	I0815 17:58:14.972731  498968 logs.go:123] Gathering logs for coredns [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568] ...
	I0815 17:58:14.972804  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568"
	I0815 17:58:15.028360  498968 logs.go:123] Gathering logs for kube-proxy [755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef] ...
	I0815 17:58:15.028439  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:58:15.081097  498968 logs.go:123] Gathering logs for kube-controller-manager [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791] ...
	I0815 17:58:15.081206  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791"
	I0815 17:58:15.147396  498968 logs.go:123] Gathering logs for kube-controller-manager [cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f] ...
	I0815 17:58:15.147430  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:58:15.220542  498968 logs.go:123] Gathering logs for kubelet ...
	I0815 17:58:15.220605  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 17:58:15.292404  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.368840     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.292689  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.368954     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-gftlr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gftlr" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.292922  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.369034     665 reflector.go:138] object-"kube-system"/"kindnet-token-mbwt5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-mbwt5" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.293180  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.369107     665 reflector.go:138] object-"kube-system"/"coredns-token-2p8pb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-2p8pb" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.293419  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.373149     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.293647  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.379215     665 reflector.go:138] object-"default"/"default-token-wlhtd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-wlhtd" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.293893  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.379383     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2zctk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2zctk" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.297451  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.450844     665 reflector.go:138] object-"kube-system"/"metrics-server-token-fcq8q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fcq8q" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.305307  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:54 old-k8s-version-460705 kubelet[665]: E0815 17:52:54.440249     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.305536  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:54 old-k8s-version-460705 kubelet[665]: E0815 17:52:54.592846     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.308388  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:06 old-k8s-version-460705 kubelet[665]: E0815 17:53:06.270605     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.310493  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:16 old-k8s-version-460705 kubelet[665]: E0815 17:53:16.691327     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.310851  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:17 old-k8s-version-460705 kubelet[665]: E0815 17:53:17.695728     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.311382  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:20 old-k8s-version-460705 kubelet[665]: E0815 17:53:20.261914     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.311836  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:22 old-k8s-version-460705 kubelet[665]: E0815 17:53:22.712000     665 pod_workers.go:191] Error syncing pod 821fca20-3432-4c38-b3e8-fdeef57602be ("storage-provisioner_kube-system(821fca20-3432-4c38-b3e8-fdeef57602be)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(821fca20-3432-4c38-b3e8-fdeef57602be)"
	W0815 17:58:15.312202  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:24 old-k8s-version-460705 kubelet[665]: E0815 17:53:24.328211     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.315202  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:34 old-k8s-version-460705 kubelet[665]: E0815 17:53:34.270080     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.315814  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:38 old-k8s-version-460705 kubelet[665]: E0815 17:53:38.762716     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.316168  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:44 old-k8s-version-460705 kubelet[665]: E0815 17:53:44.328261     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.316371  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:49 old-k8s-version-460705 kubelet[665]: E0815 17:53:49.269748     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.316718  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:58 old-k8s-version-460705 kubelet[665]: E0815 17:53:58.261622     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.316920  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:03 old-k8s-version-460705 kubelet[665]: E0815 17:54:03.262372     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.317536  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:10 old-k8s-version-460705 kubelet[665]: E0815 17:54:10.861689     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.317903  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:14 old-k8s-version-460705 kubelet[665]: E0815 17:54:14.329009     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.320545  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:15 old-k8s-version-460705 kubelet[665]: E0815 17:54:15.288209     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.320977  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:25 old-k8s-version-460705 kubelet[665]: E0815 17:54:25.261679     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.321205  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:28 old-k8s-version-460705 kubelet[665]: E0815 17:54:28.261869     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.321589  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:38 old-k8s-version-460705 kubelet[665]: E0815 17:54:38.261576     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.321809  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:43 old-k8s-version-460705 kubelet[665]: E0815 17:54:43.262643     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.322181  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:50 old-k8s-version-460705 kubelet[665]: E0815 17:54:50.261885     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.322421  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:56 old-k8s-version-460705 kubelet[665]: E0815 17:54:56.261870     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.323117  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:01 old-k8s-version-460705 kubelet[665]: E0815 17:55:01.990218     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.323509  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:04 old-k8s-version-460705 kubelet[665]: E0815 17:55:04.328215     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.323770  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:08 old-k8s-version-460705 kubelet[665]: E0815 17:55:08.262273     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.324210  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:18 old-k8s-version-460705 kubelet[665]: E0815 17:55:18.261614     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.324444  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:19 old-k8s-version-460705 kubelet[665]: E0815 17:55:19.265350     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.324805  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:31 old-k8s-version-460705 kubelet[665]: E0815 17:55:31.265265     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.325015  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:34 old-k8s-version-460705 kubelet[665]: E0815 17:55:34.261857     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.325374  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:44 old-k8s-version-460705 kubelet[665]: E0815 17:55:44.261502     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.327831  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:47 old-k8s-version-460705 kubelet[665]: E0815 17:55:47.276854     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.328205  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:58 old-k8s-version-460705 kubelet[665]: E0815 17:55:58.261567     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.328406  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:58 old-k8s-version-460705 kubelet[665]: E0815 17:55:58.262407     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.328759  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:10 old-k8s-version-460705 kubelet[665]: E0815 17:56:10.262030     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.328960  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:13 old-k8s-version-460705 kubelet[665]: E0815 17:56:13.264357     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.329309  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:21 old-k8s-version-460705 kubelet[665]: E0815 17:56:21.261686     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.329536  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:26 old-k8s-version-460705 kubelet[665]: E0815 17:56:26.261880     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.330211  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:33 old-k8s-version-460705 kubelet[665]: E0815 17:56:33.252733     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.330580  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:34 old-k8s-version-460705 kubelet[665]: E0815 17:56:34.328819     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.330783  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:37 old-k8s-version-460705 kubelet[665]: E0815 17:56:37.262397     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.331131  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:45 old-k8s-version-460705 kubelet[665]: E0815 17:56:45.262262     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.331335  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:48 old-k8s-version-460705 kubelet[665]: E0815 17:56:48.262102     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.331681  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:59 old-k8s-version-460705 kubelet[665]: E0815 17:56:59.265417     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.331889  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:59 old-k8s-version-460705 kubelet[665]: E0815 17:56:59.266290     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.332092  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:11 old-k8s-version-460705 kubelet[665]: E0815 17:57:11.264783     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.332440  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:14 old-k8s-version-460705 kubelet[665]: E0815 17:57:14.262035     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.332641  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:25 old-k8s-version-460705 kubelet[665]: E0815 17:57:25.262627     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.332986  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:28 old-k8s-version-460705 kubelet[665]: E0815 17:57:28.261540     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.333219  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:38 old-k8s-version-460705 kubelet[665]: E0815 17:57:38.261980     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.333567  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:41 old-k8s-version-460705 kubelet[665]: E0815 17:57:41.262122     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.333769  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:53 old-k8s-version-460705 kubelet[665]: E0815 17:57:53.262019     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.334115  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:56 old-k8s-version-460705 kubelet[665]: E0815 17:57:56.268331     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.334316  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.334658  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
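	The kubelet problems above reduce to two recurring failures: the metrics-server pod cannot pull its image because it is pinned to the unresolvable registry host fake.domain (the DNS lookup against 192.168.76.1:53 fails), so it alternates between ErrImagePull and ImagePullBackOff; and dashboard-metrics-scraper keeps crashing, with its CrashLoopBackOff back-off growing from 10s to 20s, 40s, 1m20s, and finally 2m40s. A minimal sketch of how one could confirm both states against this cluster, assuming the kubeconfig context carries the profile name old-k8s-version-460705 (pod names taken from the log itself):

	  kubectl --context old-k8s-version-460705 -n kube-system describe pod metrics-server-9975d5f86-wd4q2
	  kubectl --context old-k8s-version-460705 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-bjqpx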
	I0815 17:58:15.334686  498968 logs.go:123] Gathering logs for dmesg ...
	I0815 17:58:15.334713  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:58:15.362329  498968 logs.go:123] Gathering logs for etcd [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6] ...
	I0815 17:58:15.362354  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6"
	I0815 17:58:15.414974  498968 logs.go:123] Gathering logs for etcd [5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197] ...
	I0815 17:58:15.415054  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:58:15.477762  498968 logs.go:123] Gathering logs for kubernetes-dashboard [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364] ...
	I0815 17:58:15.477836  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364"
	I0815 17:58:15.526187  498968 logs.go:123] Gathering logs for containerd ...
	I0815 17:58:15.526262  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 17:58:15.589379  498968 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:58:15.589454  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:58:15.758032  498968 logs.go:123] Gathering logs for kube-apiserver [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6] ...
	I0815 17:58:15.758105  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6"
	I0815 17:58:15.844080  498968 logs.go:123] Gathering logs for kube-scheduler [27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666] ...
	I0815 17:58:15.844117  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:58:15.918852  498968 logs.go:123] Gathering logs for kube-proxy [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72] ...
	I0815 17:58:15.918882  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72"
	I0815 17:58:15.973999  498968 logs.go:123] Gathering logs for storage-provisioner [03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47] ...
	I0815 17:58:15.974026  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47"
	I0815 17:58:16.033478  498968 logs.go:123] Gathering logs for kube-apiserver [66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438] ...
	I0815 17:58:16.033508  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:58:16.116954  498968 logs.go:123] Gathering logs for coredns [bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047] ...
	I0815 17:58:16.116987  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:58:16.164153  498968 logs.go:123] Gathering logs for kube-scheduler [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065] ...
	I0815 17:58:16.164180  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065"
	I0815 17:58:16.229230  498968 logs.go:123] Gathering logs for kindnet [3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6] ...
	I0815 17:58:16.229259  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:58:16.297535  498968 logs.go:123] Gathering logs for kindnet [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97] ...
	I0815 17:58:16.297567  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97"
	I0815 17:58:16.383976  498968 logs.go:123] Gathering logs for storage-provisioner [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95] ...
	I0815 17:58:16.384008  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95"
	I0815 17:58:16.438980  498968 logs.go:123] Gathering logs for container status ...
	I0815 17:58:16.439057  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
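	Each "Gathering logs for X ..." step above shells into the node and tails the last 400 lines of the matching source: crictl logs --tail 400 <id> for containers, journalctl -u <unit> -n 400 for the kubelet and containerd units, and the bundled kubectl for describe nodes. A sketch of running the same collection by hand over minikube ssh, assuming the profile name from the log (the container id placeholder stands for any of the ids listed above):

	  minikube -p old-k8s-version-460705 ssh -- sudo crictl ps -a
	  minikube -p old-k8s-version-460705 ssh -- sudo /usr/bin/crictl logs --tail 400 <container-id>
	  minikube -p old-k8s-version-460705 ssh -- sudo journalctl -u containerd -n 400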
	I0815 17:58:16.534317  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:16.534483  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0815 17:58:16.534601  498968 out.go:270] X Problems detected in kubelet:
	W0815 17:58:16.534641  498968 out.go:270]   Aug 15 17:57:41 old-k8s-version-460705 kubelet[665]: E0815 17:57:41.262122     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:16.534866  498968 out.go:270]   Aug 15 17:57:53 old-k8s-version-460705 kubelet[665]: E0815 17:57:53.262019     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:16.534903  498968 out.go:270]   Aug 15 17:57:56 old-k8s-version-460705 kubelet[665]: E0815 17:57:56.268331     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:16.534977  498968 out.go:270]   Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:16.535020  498968 out.go:270]   Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	I0815 17:58:16.535049  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:16.535079  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
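	At this point minikube has re-printed the most recent kubelet problems as a summary and goes back to waiting; roughly ten seconds later it re-checks that the apiserver process is alive before polling its health status. A hedged sketch of the equivalent manual checks (the pgrep pattern mirrors the log line below; the healthz call via kubectl is an assumption about how to reproduce the check by hand, not what logs.go itself runs):

	  minikube -p old-k8s-version-460705 ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	  kubectl --context old-k8s-version-460705 get --raw /healthz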
	I0815 17:58:26.536094  498968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:58:26.563627  498968 api_server.go:72] duration metric: took 5m55.298829172s to wait for apiserver process to appear ...
	I0815 17:58:26.563651  498968 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:58:26.563686  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:58:26.563751  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:58:26.696998  498968 cri.go:89] found id: "898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6"
	I0815 17:58:26.697018  498968 cri.go:89] found id: "66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:58:26.697023  498968 cri.go:89] found id: ""
	I0815 17:58:26.697031  498968 logs.go:276] 2 containers: [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438]
	I0815 17:58:26.697088  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.702831  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.707420  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 17:58:26.707483  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:58:26.781317  498968 cri.go:89] found id: "1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6"
	I0815 17:58:26.781340  498968 cri.go:89] found id: "5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:58:26.781345  498968 cri.go:89] found id: ""
	I0815 17:58:26.781352  498968 logs.go:276] 2 containers: [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6 5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197]
	I0815 17:58:26.781411  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.785343  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.789466  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 17:58:26.789529  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:58:26.847996  498968 cri.go:89] found id: "ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568"
	I0815 17:58:26.848015  498968 cri.go:89] found id: "bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:58:26.848021  498968 cri.go:89] found id: ""
	I0815 17:58:26.848028  498968 logs.go:276] 2 containers: [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568 bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047]
	I0815 17:58:26.848085  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.851974  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.857321  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:58:26.857386  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:58:26.911952  498968 cri.go:89] found id: "3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065"
	I0815 17:58:26.911972  498968 cri.go:89] found id: "27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:58:26.911977  498968 cri.go:89] found id: ""
	I0815 17:58:26.911985  498968 logs.go:276] 2 containers: [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065 27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666]
	I0815 17:58:26.912042  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.919333  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.923192  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:58:26.923260  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:58:26.976029  498968 cri.go:89] found id: "ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72"
	I0815 17:58:26.976050  498968 cri.go:89] found id: "755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:58:26.976054  498968 cri.go:89] found id: ""
	I0815 17:58:26.976062  498968 logs.go:276] 2 containers: [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72 755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef]
	I0815 17:58:26.976116  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.979965  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.983541  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:58:26.983605  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:58:27.038212  498968 cri.go:89] found id: "c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791"
	I0815 17:58:27.038232  498968 cri.go:89] found id: "cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:58:27.038237  498968 cri.go:89] found id: ""
	I0815 17:58:27.038245  498968 logs.go:276] 2 containers: [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791 cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f]
	I0815 17:58:27.038302  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.043119  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.047320  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 17:58:27.047450  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:58:27.103276  498968 cri.go:89] found id: "7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97"
	I0815 17:58:27.103354  498968 cri.go:89] found id: "3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:58:27.103376  498968 cri.go:89] found id: ""
	I0815 17:58:27.103396  498968 logs.go:276] 2 containers: [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97 3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6]
	I0815 17:58:27.103477  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.107949  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.112301  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 17:58:27.112424  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 17:58:27.166321  498968 cri.go:89] found id: "ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95"
	I0815 17:58:27.166400  498968 cri.go:89] found id: "03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47"
	I0815 17:58:27.166421  498968 cri.go:89] found id: ""
	I0815 17:58:27.166441  498968 logs.go:276] 2 containers: [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95 03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47]
	I0815 17:58:27.166531  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.170741  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.174840  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 17:58:27.174961  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 17:58:27.260438  498968 cri.go:89] found id: "e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364"
	I0815 17:58:27.260516  498968 cri.go:89] found id: ""
	I0815 17:58:27.260539  498968 logs.go:276] 1 containers: [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364]
	I0815 17:58:27.260620  498968 ssh_runner.go:195] Run: which crictl
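	The block above is the container-discovery pass: for every control-plane component, logs.go asks the CRI for all containers (running or exited) whose name matches, and the --quiet flag makes crictl print bare container ids, one per line. Most components return two ids here (the current container plus the one from before the restart); kubernetes-dashboard returns one. For example, the kube-apiserver listing at the top of this pass corresponds to:

	  sudo crictl ps -a --quiet --name=kube-apiserver
	  # 898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6
	  # 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438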
	I0815 17:58:27.281246  498968 logs.go:123] Gathering logs for kube-apiserver [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6] ...
	I0815 17:58:27.281314  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6"
	I0815 17:58:27.382222  498968 logs.go:123] Gathering logs for coredns [bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047] ...
	I0815 17:58:27.382334  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:58:27.443852  498968 logs.go:123] Gathering logs for kubelet ...
	I0815 17:58:27.443880  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 17:58:27.508864  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.368840     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.509913  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.368954     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-gftlr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gftlr" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.510224  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.369034     665 reflector.go:138] object-"kube-system"/"kindnet-token-mbwt5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-mbwt5" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.510502  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.369107     665 reflector.go:138] object-"kube-system"/"coredns-token-2p8pb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-2p8pb" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.510775  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.373149     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.511034  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.379215     665 reflector.go:138] object-"default"/"default-token-wlhtd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-wlhtd" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.511347  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.379383     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2zctk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2zctk" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.515568  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.450844     665 reflector.go:138] object-"kube-system"/"metrics-server-token-fcq8q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fcq8q" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.524365  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:54 old-k8s-version-460705 kubelet[665]: E0815 17:52:54.440249     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.525040  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:54 old-k8s-version-460705 kubelet[665]: E0815 17:52:54.592846     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.527983  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:06 old-k8s-version-460705 kubelet[665]: E0815 17:53:06.270605     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.530125  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:16 old-k8s-version-460705 kubelet[665]: E0815 17:53:16.691327     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.530498  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:17 old-k8s-version-460705 kubelet[665]: E0815 17:53:17.695728     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.531051  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:20 old-k8s-version-460705 kubelet[665]: E0815 17:53:20.261914     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.531531  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:22 old-k8s-version-460705 kubelet[665]: E0815 17:53:22.712000     665 pod_workers.go:191] Error syncing pod 821fca20-3432-4c38-b3e8-fdeef57602be ("storage-provisioner_kube-system(821fca20-3432-4c38-b3e8-fdeef57602be)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(821fca20-3432-4c38-b3e8-fdeef57602be)"
	W0815 17:58:27.531897  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:24 old-k8s-version-460705 kubelet[665]: E0815 17:53:24.328211     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.536011  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:34 old-k8s-version-460705 kubelet[665]: E0815 17:53:34.270080     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.536704  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:38 old-k8s-version-460705 kubelet[665]: E0815 17:53:38.762716     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.537093  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:44 old-k8s-version-460705 kubelet[665]: E0815 17:53:44.328261     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.537369  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:49 old-k8s-version-460705 kubelet[665]: E0815 17:53:49.269748     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.537779  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:58 old-k8s-version-460705 kubelet[665]: E0815 17:53:58.261622     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.538015  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:03 old-k8s-version-460705 kubelet[665]: E0815 17:54:03.262372     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.538689  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:10 old-k8s-version-460705 kubelet[665]: E0815 17:54:10.861689     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.539111  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:14 old-k8s-version-460705 kubelet[665]: E0815 17:54:14.329009     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.541883  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:15 old-k8s-version-460705 kubelet[665]: E0815 17:54:15.288209     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.542249  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:25 old-k8s-version-460705 kubelet[665]: E0815 17:54:25.261679     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.542452  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:28 old-k8s-version-460705 kubelet[665]: E0815 17:54:28.261869     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.542855  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:38 old-k8s-version-460705 kubelet[665]: E0815 17:54:38.261576     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.543057  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:43 old-k8s-version-460705 kubelet[665]: E0815 17:54:43.262643     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.543413  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:50 old-k8s-version-460705 kubelet[665]: E0815 17:54:50.261885     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.543608  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:56 old-k8s-version-460705 kubelet[665]: E0815 17:54:56.261870     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.544268  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:01 old-k8s-version-460705 kubelet[665]: E0815 17:55:01.990218     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.544656  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:04 old-k8s-version-460705 kubelet[665]: E0815 17:55:04.328215     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.544855  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:08 old-k8s-version-460705 kubelet[665]: E0815 17:55:08.262273     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.545645  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:18 old-k8s-version-460705 kubelet[665]: E0815 17:55:18.261614     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.545913  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:19 old-k8s-version-460705 kubelet[665]: E0815 17:55:19.265350     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.546294  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:31 old-k8s-version-460705 kubelet[665]: E0815 17:55:31.265265     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.546521  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:34 old-k8s-version-460705 kubelet[665]: E0815 17:55:34.261857     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.546904  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:44 old-k8s-version-460705 kubelet[665]: E0815 17:55:44.261502     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.549708  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:47 old-k8s-version-460705 kubelet[665]: E0815 17:55:47.276854     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.550131  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:58 old-k8s-version-460705 kubelet[665]: E0815 17:55:58.261567     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.550378  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:58 old-k8s-version-460705 kubelet[665]: E0815 17:55:58.262407     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.550761  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:10 old-k8s-version-460705 kubelet[665]: E0815 17:56:10.262030     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.550999  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:13 old-k8s-version-460705 kubelet[665]: E0815 17:56:13.264357     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.551380  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:21 old-k8s-version-460705 kubelet[665]: E0815 17:56:21.261686     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.551614  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:26 old-k8s-version-460705 kubelet[665]: E0815 17:56:26.261880     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.552280  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:33 old-k8s-version-460705 kubelet[665]: E0815 17:56:33.252733     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.552652  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:34 old-k8s-version-460705 kubelet[665]: E0815 17:56:34.328819     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.552875  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:37 old-k8s-version-460705 kubelet[665]: E0815 17:56:37.262397     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.553282  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:45 old-k8s-version-460705 kubelet[665]: E0815 17:56:45.262262     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.553505  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:48 old-k8s-version-460705 kubelet[665]: E0815 17:56:48.262102     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.553895  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:59 old-k8s-version-460705 kubelet[665]: E0815 17:56:59.265417     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.554117  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:59 old-k8s-version-460705 kubelet[665]: E0815 17:56:59.266290     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.555099  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:11 old-k8s-version-460705 kubelet[665]: E0815 17:57:11.264783     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.555524  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:14 old-k8s-version-460705 kubelet[665]: E0815 17:57:14.262035     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.555737  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:25 old-k8s-version-460705 kubelet[665]: E0815 17:57:25.262627     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.556094  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:28 old-k8s-version-460705 kubelet[665]: E0815 17:57:28.261540     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.556308  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:38 old-k8s-version-460705 kubelet[665]: E0815 17:57:38.261980     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.556739  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:41 old-k8s-version-460705 kubelet[665]: E0815 17:57:41.262122     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.556976  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:53 old-k8s-version-460705 kubelet[665]: E0815 17:57:53.262019     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.557358  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:56 old-k8s-version-460705 kubelet[665]: E0815 17:57:56.268331     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.557579  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.557962  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.558215  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:15 old-k8s-version-460705 kubelet[665]: E0815 17:58:15.265484     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.558592  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:24 old-k8s-version-460705 kubelet[665]: E0815 17:58:24.261897     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.558906  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:26 old-k8s-version-460705 kubelet[665]: E0815 17:58:26.262340     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
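	The kubelet problems above reduce to two repeating failures. metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because the test deliberately rewrites the MetricsServer registry to the unresolvable fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the audit log further down), so the ErrImagePull/ImagePullBackOff loop is expected. dashboard-metrics-scraper is in CrashLoopBackOff, with the back-off interval doubling on each restart (10s, 20s, 40s, 1m20s, 2m40s) up to kubelet's 5-minute cap. A minimal sketch for inspecting both pods by hand, using the pod names from the log and assuming the old-k8s-version-460705 kubeconfig context is still available:

	kubectl --context old-k8s-version-460705 -n kube-system describe pod metrics-server-9975d5f86-wd4q2
	kubectl --context old-k8s-version-460705 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-bjqpx --previous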
	I0815 17:58:27.558923  498968 logs.go:123] Gathering logs for kube-scheduler [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065] ...
	I0815 17:58:27.558948  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065"
	I0815 17:58:27.612976  498968 logs.go:123] Gathering logs for kube-proxy [755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef] ...
	I0815 17:58:27.613004  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:58:27.670623  498968 logs.go:123] Gathering logs for storage-provisioner [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95] ...
	I0815 17:58:27.670648  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95"
	I0815 17:58:27.737187  498968 logs.go:123] Gathering logs for kubernetes-dashboard [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364] ...
	I0815 17:58:27.737215  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364"
	I0815 17:58:27.795544  498968 logs.go:123] Gathering logs for dmesg ...
	I0815 17:58:27.795570  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:58:27.814027  498968 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:58:27.814054  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:58:27.991678  498968 logs.go:123] Gathering logs for kube-apiserver [66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438] ...
	I0815 17:58:27.991756  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:58:28.062006  498968 logs.go:123] Gathering logs for etcd [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6] ...
	I0815 17:58:28.062049  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6"
	I0815 17:58:28.126014  498968 logs.go:123] Gathering logs for kube-scheduler [27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666] ...
	I0815 17:58:28.126047  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:58:28.183679  498968 logs.go:123] Gathering logs for kube-controller-manager [cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f] ...
	I0815 17:58:28.183712  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:58:28.277954  498968 logs.go:123] Gathering logs for kindnet [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97] ...
	I0815 17:58:28.277989  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97"
	I0815 17:58:28.370222  498968 logs.go:123] Gathering logs for storage-provisioner [03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47] ...
	I0815 17:58:28.370260  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47"
	I0815 17:58:28.438754  498968 logs.go:123] Gathering logs for etcd [5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197] ...
	I0815 17:58:28.438786  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:58:28.489041  498968 logs.go:123] Gathering logs for coredns [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568] ...
	I0815 17:58:28.489071  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568"
	I0815 17:58:28.561899  498968 logs.go:123] Gathering logs for kube-proxy [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72] ...
	I0815 17:58:28.561931  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72"
	I0815 17:58:28.610927  498968 logs.go:123] Gathering logs for kube-controller-manager [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791] ...
	I0815 17:58:28.610963  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791"
	I0815 17:58:28.715451  498968 logs.go:123] Gathering logs for kindnet [3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6] ...
	I0815 17:58:28.715487  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:58:28.790001  498968 logs.go:123] Gathering logs for containerd ...
	I0815 17:58:28.790036  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 17:58:28.859683  498968 logs.go:123] Gathering logs for container status ...
	I0815 17:58:28.859723  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
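	Each "Gathering logs for ..." step above is an SSH into the node followed by crictl tailing the last 400 lines of the named container's log. A minimal sketch for reproducing any of these steps manually, assuming the profile is still running; the 64-character hex IDs printed above identify the containers, and <container-id> below is a placeholder for one of them:

	minikube -p old-k8s-version-460705 ssh -- sudo crictl ps -a
	minikube -p old-k8s-version-460705 ssh -- sudo crictl logs --tail 400 <container-id>
	minikube -p old-k8s-version-460705 ssh -- sudo journalctl -u containerd -n 400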
	I0815 17:58:28.935572  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:28.935598  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0815 17:58:28.935690  498968 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0815 17:58:28.935709  498968 out.go:270]   Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:28.935715  498968 out.go:270]   Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	  Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:28.935903  498968 out.go:270]   Aug 15 17:58:15 old-k8s-version-460705 kubelet[665]: E0815 17:58:15.265484     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 15 17:58:15 old-k8s-version-460705 kubelet[665]: E0815 17:58:15.265484     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:28.935919  498968 out.go:270]   Aug 15 17:58:24 old-k8s-version-460705 kubelet[665]: E0815 17:58:24.261897     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	  Aug 15 17:58:24 old-k8s-version-460705 kubelet[665]: E0815 17:58:24.261897     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:28.935935  498968 out.go:270]   Aug 15 17:58:26 old-k8s-version-460705 kubelet[665]: E0815 17:58:26.262340     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 15 17:58:26 old-k8s-version-460705 kubelet[665]: E0815 17:58:26.262340     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0815 17:58:28.935941  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:28.935949  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:58:38.937241  498968 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0815 17:58:38.947104  498968 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
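	Note that the API server itself is healthy at this point: the healthz probe returns 200/ok. A minimal sketch of the same check done by hand, assuming either the profile's kubeconfig context or direct reachability of the endpoint logged above (-k skips verification, since the cluster's own CA is not in the host trust store):

	kubectl --context old-k8s-version-460705 get --raw /healthz
	curl -k https://192.168.76.2:8443/healthz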
	I0815 17:58:38.949298  498968 out.go:201] 
	W0815 17:58:38.951695  498968 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0815 17:58:38.951736  498968 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0815 17:58:38.951759  498968 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0815 17:58:38.951774  498968 out.go:270] * 
	* 
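	So the failure is not a dead API server but minikube's wait logic never seeing the control plane report v1.20.0. A minimal sketch of the suggested recovery, assuming no other profiles on the host need to be kept; --purge additionally deletes the cached state under ~/.minikube:

	minikube delete -p old-k8s-version-460705      # remove just this profile
	minikube delete --all --purge                  # or remove every profile plus cached state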
	W0815 17:58:38.952821  498968 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:58:38.957549  498968 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-460705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
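	Exit status 102 here is one of minikube's reserved, error-specific exit codes and accompanies the K8S_UNHEALTHY_CONTROL_PLANE error shown in the stderr above. A minimal sketch for re-running the failing step and capturing the status, assuming the same binary; the kvm-specific flags from the original invocation are dropped, since they only apply to the kvm2 driver:

	out/minikube-linux-arm64 start -p old-k8s-version-460705 --memory=2200 \
	  --alsologtostderr --wait=true --disable-driver-mounts --keep-context=false \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
	echo "exit status: $?"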
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-460705
helpers_test.go:235: (dbg) docker inspect old-k8s-version-460705:

-- stdout --
	[
	    {
	        "Id": "bcf082b35a39cdf932cfb7cd17b4624ef228d8b0840b4e3dc7cdfbb02d1482b3",
	        "Created": "2024-08-15T17:49:23.91002999Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 499269,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-15T17:52:24.0227903Z",
	            "FinishedAt": "2024-08-15T17:52:22.798270177Z"
	        },
	        "Image": "sha256:2b339a1cac4376103734d3066f7ccdf0ac7377a2f8f8d5eb9e81c29f3abcec50",
	        "ResolvConfPath": "/var/lib/docker/containers/bcf082b35a39cdf932cfb7cd17b4624ef228d8b0840b4e3dc7cdfbb02d1482b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bcf082b35a39cdf932cfb7cd17b4624ef228d8b0840b4e3dc7cdfbb02d1482b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/bcf082b35a39cdf932cfb7cd17b4624ef228d8b0840b4e3dc7cdfbb02d1482b3/hosts",
	        "LogPath": "/var/lib/docker/containers/bcf082b35a39cdf932cfb7cd17b4624ef228d8b0840b4e3dc7cdfbb02d1482b3/bcf082b35a39cdf932cfb7cd17b4624ef228d8b0840b4e3dc7cdfbb02d1482b3-json.log",
	        "Name": "/old-k8s-version-460705",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-460705:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-460705",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d137eb218c203a66ae74c12c120ced8b9ed952101c025b893bc5bb63727dd692-init/diff:/var/lib/docker/overlay2/a163b16fa32e47fd7ab2fe98717ea5e008831d97c60d714c2328532bf1d6d774/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d137eb218c203a66ae74c12c120ced8b9ed952101c025b893bc5bb63727dd692/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d137eb218c203a66ae74c12c120ced8b9ed952101c025b893bc5bb63727dd692/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d137eb218c203a66ae74c12c120ced8b9ed952101c025b893bc5bb63727dd692/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-460705",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-460705/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-460705",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-460705",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-460705",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e008eb8127f20b0505d105505767b78163cdf718c3236045052c138719ea429d",
	            "SandboxKey": "/var/run/docker/netns/e008eb8127f2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-460705": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f5c2faf7462b41e38218da19675dc7b8fdb1dea069e8863fa1252454e15e3fce",
	                    "EndpointID": "5e152b56a9bdd95841bc79be06d56763d39ad2152b7ab47ad1c74fb51828b0c1",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-460705",
	                        "bcf082b35a39"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
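Three details worth pulling out of this output: State records one stop/start cycle (FinishedAt 17:52:22, StartedAt 17:52:24), matching the test's stop followed by SecondStart; "Memory": 2306867200 is exactly 2200 MiB (2200 x 1024 x 1024 bytes), i.e. the --memory=2200 flag; and the PortBindings stanza requests an empty HostPort, so the concrete host ports (e.g. 8443/tcp on 127.0.0.1:33436) are assigned dynamically and only appear under NetworkSettings.Ports. A minimal sketch for extracting such fields with Go templates instead of reading the raw JSON:

	docker inspect -f '{{.State.Status}} (pid {{.State.Pid}})' old-k8s-version-460705
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-460705
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-460705").IPAddress}}' old-k8s-version-460705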
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460705 -n old-k8s-version-460705
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-460705 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-460705 logs -n 25: (3.008466534s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-900222                              | cert-expiration-900222   | jenkins | v1.33.1 | 15 Aug 24 17:48 UTC | 15 Aug 24 17:48 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-814095                               | force-systemd-env-814095 | jenkins | v1.33.1 | 15 Aug 24 17:48 UTC | 15 Aug 24 17:48 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-814095                            | force-systemd-env-814095 | jenkins | v1.33.1 | 15 Aug 24 17:48 UTC | 15 Aug 24 17:48 UTC |
	| start   | -p cert-options-559985                                 | cert-options-559985      | jenkins | v1.33.1 | 15 Aug 24 17:48 UTC | 15 Aug 24 17:49 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-559985 ssh                                | cert-options-559985      | jenkins | v1.33.1 | 15 Aug 24 17:49 UTC | 15 Aug 24 17:49 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-559985 -- sudo                         | cert-options-559985      | jenkins | v1.33.1 | 15 Aug 24 17:49 UTC | 15 Aug 24 17:49 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-559985                                 | cert-options-559985      | jenkins | v1.33.1 | 15 Aug 24 17:49 UTC | 15 Aug 24 17:49 UTC |
	| start   | -p old-k8s-version-460705                              | old-k8s-version-460705   | jenkins | v1.33.1 | 15 Aug 24 17:49 UTC | 15 Aug 24 17:51 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-900222                              | cert-expiration-900222   | jenkins | v1.33.1 | 15 Aug 24 17:51 UTC | 15 Aug 24 17:51 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-900222                              | cert-expiration-900222   | jenkins | v1.33.1 | 15 Aug 24 17:51 UTC | 15 Aug 24 17:51 UTC |
	| start   | -p no-preload-794171                                   | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:51 UTC | 15 Aug 24 17:53 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-460705        | old-k8s-version-460705   | jenkins | v1.33.1 | 15 Aug 24 17:52 UTC | 15 Aug 24 17:52 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-460705                              | old-k8s-version-460705   | jenkins | v1.33.1 | 15 Aug 24 17:52 UTC | 15 Aug 24 17:52 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-460705             | old-k8s-version-460705   | jenkins | v1.33.1 | 15 Aug 24 17:52 UTC | 15 Aug 24 17:52 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-460705                              | old-k8s-version-460705   | jenkins | v1.33.1 | 15 Aug 24 17:52 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-794171             | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:53 UTC | 15 Aug 24 17:53 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-794171                                   | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:53 UTC | 15 Aug 24 17:53 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-794171                  | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:53 UTC | 15 Aug 24 17:53 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-794171                                   | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:53 UTC | 15 Aug 24 17:57 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| image   | no-preload-794171 image list                           | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:58 UTC | 15 Aug 24 17:58 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-794171                                   | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:58 UTC | 15 Aug 24 17:58 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-794171                                   | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:58 UTC | 15 Aug 24 17:58 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-794171                                   | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:58 UTC | 15 Aug 24 17:58 UTC |
	| delete  | -p no-preload-794171                                   | no-preload-794171        | jenkins | v1.33.1 | 15 Aug 24 17:58 UTC | 15 Aug 24 17:58 UTC |
	| start   | -p embed-certs-918291                                  | embed-certs-918291       | jenkins | v1.33.1 | 15 Aug 24 17:58 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
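	The final table entry above is the start under test in this run. Reconstructed from those rows as a single invocation (binary path assumed to match the MINIKUBE_BIN value in the environment dump below):

	  out/minikube-linux-arm64 start -p embed-certs-918291 --memory=2200 \
	    --alsologtostderr --wait=true --embed-certs --driver=docker \
	    --container-runtime=containerd --kubernetes-version=v1.31.0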
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:58:13
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
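	Per the format line above, each entry decodes as severity ([IWEF]), date (mmdd), wall-clock time, thread id, and source location: in the first line below, I = Info, 0815 = Aug 15, time 17:58:13.109667, thread 509696, emitted at out.go:345. A minimal sketch for pulling only warnings and errors out of this dump, assuming it has been saved locally as last-start.log (a hypothetical filename):

	  grep -E '^[[:space:]]*[WE][0-9]{4} ' last-start.log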
	I0815 17:58:13.109667  509696 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:58:13.109859  509696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:58:13.109891  509696 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:13.109911  509696 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:58:13.110179  509696 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:58:13.110642  509696 out.go:352] Setting JSON to false
	I0815 17:58:13.111677  509696 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9636,"bootTime":1723735057,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 17:58:13.111786  509696 start.go:139] virtualization:  
	I0815 17:58:13.115238  509696 out.go:177] * [embed-certs-918291] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 17:58:13.116946  509696 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:58:13.117031  509696 notify.go:220] Checking for updates...
	I0815 17:58:13.120720  509696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:58:13.122570  509696 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:58:13.124100  509696 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	I0815 17:58:13.125838  509696 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 17:58:13.127373  509696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:58:13.129722  509696 config.go:182] Loaded profile config "old-k8s-version-460705": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0815 17:58:13.129883  509696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:58:13.158699  509696 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:58:13.158812  509696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:58:13.221776  509696 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 17:58:13.205767986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:58:13.221884  509696 docker.go:307] overlay module found
	I0815 17:58:13.223726  509696 out.go:177] * Using the docker driver based on user configuration
	I0815 17:58:13.225478  509696 start.go:297] selected driver: docker
	I0815 17:58:13.225512  509696 start.go:901] validating driver "docker" against <nil>
	I0815 17:58:13.225527  509696 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:58:13.226123  509696 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:58:13.295476  509696 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-15 17:58:13.285795173 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:58:13.295649  509696 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:58:13.295894  509696 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0815 17:58:13.297768  509696 out.go:177] * Using Docker driver with root privileges
	I0815 17:58:13.299508  509696 cni.go:84] Creating CNI manager for ""
	I0815 17:58:13.299536  509696 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:58:13.299549  509696 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:58:13.299627  509696 start.go:340] cluster config:
	{Name:embed-certs-918291 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:58:13.304360  509696 out.go:177] * Starting "embed-certs-918291" primary control-plane node in "embed-certs-918291" cluster
	I0815 17:58:13.306241  509696 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 17:58:13.307825  509696 out.go:177] * Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:58:09.895933  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:12.394175  498968 pod_ready.go:103] pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace has status "Ready":"False"
	I0815 17:58:13.309557  509696 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 17:58:13.309613  509696 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0815 17:58:13.309626  509696 cache.go:56] Caching tarball of preloaded images
	I0815 17:58:13.309627  509696 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:58:13.309730  509696 preload.go:172] Found /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0815 17:58:13.309741  509696 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0815 17:58:13.309860  509696 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/config.json ...
	I0815 17:58:13.309889  509696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/config.json: {Name:mkfdccb96854675c60806a08881dcc98c103a228 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0815 17:58:13.327663  509696 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 is of wrong architecture
	I0815 17:58:13.327687  509696 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:58:13.327773  509696 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:58:13.327797  509696 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:58:13.327805  509696 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:58:13.327813  509696 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:58:13.327819  509696 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from local cache
	I0815 17:58:13.457320  509696 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 from cached tarball
	I0815 17:58:13.457361  509696 cache.go:194] Successfully downloaded all kic artifacts
	I0815 17:58:13.457402  509696 start.go:360] acquireMachinesLock for embed-certs-918291: {Name:mke3451823600d6b32dd7ba67b3f710bfab79a8a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0815 17:58:13.457516  509696 start.go:364] duration metric: took 91.97µs to acquireMachinesLock for "embed-certs-918291"
	I0815 17:58:13.457547  509696 start.go:93] Provisioning new machine with config: &{Name:embed-certs-918291 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0815 17:58:13.457633  509696 start.go:125] createHost starting for "" (driver="docker")
	I0815 17:58:13.459903  509696 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0815 17:58:13.460149  509696 start.go:159] libmachine.API.Create for "embed-certs-918291" (driver="docker")
	I0815 17:58:13.460184  509696 client.go:168] LocalClient.Create starting
	I0815 17:58:13.460253  509696 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem
	I0815 17:58:13.460326  509696 main.go:141] libmachine: Decoding PEM data...
	I0815 17:58:13.460344  509696 main.go:141] libmachine: Parsing certificate...
	I0815 17:58:13.460383  509696 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem
	I0815 17:58:13.460403  509696 main.go:141] libmachine: Decoding PEM data...
	I0815 17:58:13.460416  509696 main.go:141] libmachine: Parsing certificate...
	I0815 17:58:13.460788  509696 cli_runner.go:164] Run: docker network inspect embed-certs-918291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0815 17:58:13.476632  509696 cli_runner.go:211] docker network inspect embed-certs-918291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0815 17:58:13.476729  509696 network_create.go:284] running [docker network inspect embed-certs-918291] to gather additional debugging logs...
	I0815 17:58:13.476754  509696 cli_runner.go:164] Run: docker network inspect embed-certs-918291
	W0815 17:58:13.491092  509696 cli_runner.go:211] docker network inspect embed-certs-918291 returned with exit code 1
	I0815 17:58:13.491123  509696 network_create.go:287] error running [docker network inspect embed-certs-918291]: docker network inspect embed-certs-918291: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-918291 not found
	I0815 17:58:13.491136  509696 network_create.go:289] output of [docker network inspect embed-certs-918291]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-918291 not found
	
	** /stderr **
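	The failed inspect above is expected at this point: the network does not exist until the create a few lines below succeeds. The same probe can be reproduced by hand with a plain inspect (no format template needed for an existence check):

	  docker network inspect embed-certs-918291
	  # exits 1 with "network embed-certs-918291 not found" until the network is created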
	I0815 17:58:13.491246  509696 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:58:13.508824  509696 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3249d8627ad3 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:19:18:27:b7} reservation:<nil>}
	I0815 17:58:13.509353  509696 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-74be2adc41a3 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:3d:11:a4:87} reservation:<nil>}
	I0815 17:58:13.509717  509696 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-c477c57d441e IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:3c:11:1e:34} reservation:<nil>}
	I0815 17:58:13.510114  509696 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-f5c2faf7462b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:f4:d5:9d:37} reservation:<nil>}
	I0815 17:58:13.510640  509696 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017e9ca0}
	I0815 17:58:13.510687  509696 network_create.go:124] attempt to create docker network embed-certs-918291 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0815 17:58:13.510768  509696 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-918291 embed-certs-918291
	I0815 17:58:13.581846  509696 network_create.go:108] docker network embed-certs-918291 192.168.85.0/24 created
	I0815 17:58:13.581887  509696 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-918291" container
	I0815 17:58:13.581980  509696 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0815 17:58:13.598009  509696 cli_runner.go:164] Run: docker volume create embed-certs-918291 --label name.minikube.sigs.k8s.io=embed-certs-918291 --label created_by.minikube.sigs.k8s.io=true
	I0815 17:58:13.614799  509696 oci.go:103] Successfully created a docker volume embed-certs-918291
	I0815 17:58:13.614885  509696 cli_runner.go:164] Run: docker run --rm --name embed-certs-918291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-918291 --entrypoint /usr/bin/test -v embed-certs-918291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -d /var/lib
	I0815 17:58:14.265924  509696 oci.go:107] Successfully prepared a docker volume embed-certs-918291
	I0815 17:58:14.265971  509696 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 17:58:14.265993  509696 kic.go:194] Starting extracting preloaded images to volume ...
	I0815 17:58:14.266091  509696 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-918291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir
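	The two docker run invocations above first verify the named volume with /usr/bin/test, then untar the preloaded image cache into it via tar -I lz4. The tarball itself can be integrity-checked on the host beforehand, assuming the lz4 CLI is installed:

	  lz4 -t /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4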
	I0815 17:58:14.392131  498968 pod_ready.go:82] duration metric: took 4m0.007420447s for pod "metrics-server-9975d5f86-wd4q2" in "kube-system" namespace to be "Ready" ...
	E0815 17:58:14.392156  498968 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0815 17:58:14.392164  498968 pod_ready.go:39] duration metric: took 5m23.841477652s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0815 17:58:14.392180  498968 api_server.go:52] waiting for apiserver process to appear ...
	I0815 17:58:14.392210  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:58:14.392263  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:58:14.466245  498968 cri.go:89] found id: "898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6"
	I0815 17:58:14.466265  498968 cri.go:89] found id: "66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:58:14.466270  498968 cri.go:89] found id: ""
	I0815 17:58:14.466277  498968 logs.go:276] 2 containers: [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438]
	I0815 17:58:14.466332  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.472712  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.476671  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 17:58:14.476743  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:58:14.569437  498968 cri.go:89] found id: "1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6"
	I0815 17:58:14.569457  498968 cri.go:89] found id: "5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:58:14.569461  498968 cri.go:89] found id: ""
	I0815 17:58:14.569468  498968 logs.go:276] 2 containers: [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6 5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197]
	I0815 17:58:14.569528  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.573756  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.577642  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 17:58:14.577709  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:58:14.627733  498968 cri.go:89] found id: "ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568"
	I0815 17:58:14.627752  498968 cri.go:89] found id: "bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:58:14.627756  498968 cri.go:89] found id: ""
	I0815 17:58:14.627764  498968 logs.go:276] 2 containers: [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568 bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047]
	I0815 17:58:14.627816  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.631979  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.635941  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:58:14.636058  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:58:14.688370  498968 cri.go:89] found id: "3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065"
	I0815 17:58:14.688448  498968 cri.go:89] found id: "27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:58:14.688467  498968 cri.go:89] found id: ""
	I0815 17:58:14.688511  498968 logs.go:276] 2 containers: [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065 27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666]
	I0815 17:58:14.688603  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.692767  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.696849  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:58:14.696924  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:58:14.740801  498968 cri.go:89] found id: "ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72"
	I0815 17:58:14.740823  498968 cri.go:89] found id: "755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:58:14.740828  498968 cri.go:89] found id: ""
	I0815 17:58:14.740835  498968 logs.go:276] 2 containers: [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72 755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef]
	I0815 17:58:14.740892  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.745193  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.749095  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:58:14.749226  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:58:14.796992  498968 cri.go:89] found id: "c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791"
	I0815 17:58:14.797067  498968 cri.go:89] found id: "cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:58:14.797087  498968 cri.go:89] found id: ""
	I0815 17:58:14.797110  498968 logs.go:276] 2 containers: [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791 cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f]
	I0815 17:58:14.797229  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.801299  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.805100  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 17:58:14.805249  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:58:14.859351  498968 cri.go:89] found id: "7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97"
	I0815 17:58:14.859417  498968 cri.go:89] found id: "3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:58:14.859436  498968 cri.go:89] found id: ""
	I0815 17:58:14.859458  498968 logs.go:276] 2 containers: [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97 3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6]
	I0815 17:58:14.859541  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.863622  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.867511  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 17:58:14.867647  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 17:58:14.912611  498968 cri.go:89] found id: "e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364"
	I0815 17:58:14.912683  498968 cri.go:89] found id: ""
	I0815 17:58:14.912706  498968 logs.go:276] 1 containers: [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364]
	I0815 17:58:14.912788  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.916954  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 17:58:14.917086  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 17:58:14.964100  498968 cri.go:89] found id: "ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95"
	I0815 17:58:14.964176  498968 cri.go:89] found id: "03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47"
	I0815 17:58:14.964195  498968 cri.go:89] found id: ""
	I0815 17:58:14.964216  498968 logs.go:276] 2 containers: [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95 03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47]
	I0815 17:58:14.964308  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:14.968707  498968 ssh_runner.go:195] Run: which crictl
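	The sequence above resolves up to two container IDs per control-plane component before log gathering begins. Collapsed into shell, the pattern the harness is running amounts to the following (assuming crictl is on PATH and run as root):

	  ID=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	  sudo /usr/bin/crictl logs --tail 400 "$ID"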
	I0815 17:58:14.972731  498968 logs.go:123] Gathering logs for coredns [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568] ...
	I0815 17:58:14.972804  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568"
	I0815 17:58:15.028360  498968 logs.go:123] Gathering logs for kube-proxy [755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef] ...
	I0815 17:58:15.028439  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:58:15.081097  498968 logs.go:123] Gathering logs for kube-controller-manager [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791] ...
	I0815 17:58:15.081206  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791"
	I0815 17:58:15.147396  498968 logs.go:123] Gathering logs for kube-controller-manager [cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f] ...
	I0815 17:58:15.147430  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:58:15.220542  498968 logs.go:123] Gathering logs for kubelet ...
	I0815 17:58:15.220605  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 17:58:15.292404  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.368840     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.292689  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.368954     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-gftlr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gftlr" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.292922  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.369034     665 reflector.go:138] object-"kube-system"/"kindnet-token-mbwt5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-mbwt5" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.293180  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.369107     665 reflector.go:138] object-"kube-system"/"coredns-token-2p8pb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-2p8pb" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.293419  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.373149     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.293647  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.379215     665 reflector.go:138] object-"default"/"default-token-wlhtd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-wlhtd" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.293893  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.379383     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2zctk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2zctk" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.297451  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.450844     665 reflector.go:138] object-"kube-system"/"metrics-server-token-fcq8q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fcq8q" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:15.305307  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:54 old-k8s-version-460705 kubelet[665]: E0815 17:52:54.440249     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.305536  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:54 old-k8s-version-460705 kubelet[665]: E0815 17:52:54.592846     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.308388  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:06 old-k8s-version-460705 kubelet[665]: E0815 17:53:06.270605     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.310493  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:16 old-k8s-version-460705 kubelet[665]: E0815 17:53:16.691327     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.310851  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:17 old-k8s-version-460705 kubelet[665]: E0815 17:53:17.695728     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.311382  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:20 old-k8s-version-460705 kubelet[665]: E0815 17:53:20.261914     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.311836  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:22 old-k8s-version-460705 kubelet[665]: E0815 17:53:22.712000     665 pod_workers.go:191] Error syncing pod 821fca20-3432-4c38-b3e8-fdeef57602be ("storage-provisioner_kube-system(821fca20-3432-4c38-b3e8-fdeef57602be)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(821fca20-3432-4c38-b3e8-fdeef57602be)"
	W0815 17:58:15.312202  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:24 old-k8s-version-460705 kubelet[665]: E0815 17:53:24.328211     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.315202  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:34 old-k8s-version-460705 kubelet[665]: E0815 17:53:34.270080     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.315814  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:38 old-k8s-version-460705 kubelet[665]: E0815 17:53:38.762716     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.316168  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:44 old-k8s-version-460705 kubelet[665]: E0815 17:53:44.328261     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.316371  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:49 old-k8s-version-460705 kubelet[665]: E0815 17:53:49.269748     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.316718  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:58 old-k8s-version-460705 kubelet[665]: E0815 17:53:58.261622     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.316920  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:03 old-k8s-version-460705 kubelet[665]: E0815 17:54:03.262372     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.317536  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:10 old-k8s-version-460705 kubelet[665]: E0815 17:54:10.861689     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.317903  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:14 old-k8s-version-460705 kubelet[665]: E0815 17:54:14.329009     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.320545  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:15 old-k8s-version-460705 kubelet[665]: E0815 17:54:15.288209     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.320977  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:25 old-k8s-version-460705 kubelet[665]: E0815 17:54:25.261679     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.321205  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:28 old-k8s-version-460705 kubelet[665]: E0815 17:54:28.261869     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.321589  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:38 old-k8s-version-460705 kubelet[665]: E0815 17:54:38.261576     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.321809  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:43 old-k8s-version-460705 kubelet[665]: E0815 17:54:43.262643     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.322181  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:50 old-k8s-version-460705 kubelet[665]: E0815 17:54:50.261885     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.322421  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:56 old-k8s-version-460705 kubelet[665]: E0815 17:54:56.261870     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.323117  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:01 old-k8s-version-460705 kubelet[665]: E0815 17:55:01.990218     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.323509  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:04 old-k8s-version-460705 kubelet[665]: E0815 17:55:04.328215     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.323770  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:08 old-k8s-version-460705 kubelet[665]: E0815 17:55:08.262273     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.324210  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:18 old-k8s-version-460705 kubelet[665]: E0815 17:55:18.261614     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.324444  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:19 old-k8s-version-460705 kubelet[665]: E0815 17:55:19.265350     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.324805  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:31 old-k8s-version-460705 kubelet[665]: E0815 17:55:31.265265     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.325015  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:34 old-k8s-version-460705 kubelet[665]: E0815 17:55:34.261857     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.325374  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:44 old-k8s-version-460705 kubelet[665]: E0815 17:55:44.261502     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.327831  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:47 old-k8s-version-460705 kubelet[665]: E0815 17:55:47.276854     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:15.328205  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:58 old-k8s-version-460705 kubelet[665]: E0815 17:55:58.261567     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.328406  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:58 old-k8s-version-460705 kubelet[665]: E0815 17:55:58.262407     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.328759  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:10 old-k8s-version-460705 kubelet[665]: E0815 17:56:10.262030     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.328960  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:13 old-k8s-version-460705 kubelet[665]: E0815 17:56:13.264357     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.329309  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:21 old-k8s-version-460705 kubelet[665]: E0815 17:56:21.261686     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.329536  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:26 old-k8s-version-460705 kubelet[665]: E0815 17:56:26.261880     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.330211  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:33 old-k8s-version-460705 kubelet[665]: E0815 17:56:33.252733     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.330580  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:34 old-k8s-version-460705 kubelet[665]: E0815 17:56:34.328819     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.330783  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:37 old-k8s-version-460705 kubelet[665]: E0815 17:56:37.262397     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.331131  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:45 old-k8s-version-460705 kubelet[665]: E0815 17:56:45.262262     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.331335  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:48 old-k8s-version-460705 kubelet[665]: E0815 17:56:48.262102     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.331681  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:59 old-k8s-version-460705 kubelet[665]: E0815 17:56:59.265417     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.331889  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:59 old-k8s-version-460705 kubelet[665]: E0815 17:56:59.266290     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.332092  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:11 old-k8s-version-460705 kubelet[665]: E0815 17:57:11.264783     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.332440  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:14 old-k8s-version-460705 kubelet[665]: E0815 17:57:14.262035     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.332641  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:25 old-k8s-version-460705 kubelet[665]: E0815 17:57:25.262627     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.332986  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:28 old-k8s-version-460705 kubelet[665]: E0815 17:57:28.261540     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.333219  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:38 old-k8s-version-460705 kubelet[665]: E0815 17:57:38.261980     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.333567  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:41 old-k8s-version-460705 kubelet[665]: E0815 17:57:41.262122     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.333769  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:53 old-k8s-version-460705 kubelet[665]: E0815 17:57:53.262019     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.334115  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:56 old-k8s-version-460705 kubelet[665]: E0815 17:57:56.268331     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:15.334316  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:15.334658  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	I0815 17:58:15.334686  498968 logs.go:123] Gathering logs for dmesg ...
	I0815 17:58:15.334713  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:58:15.362329  498968 logs.go:123] Gathering logs for etcd [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6] ...
	I0815 17:58:15.362354  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6"
	I0815 17:58:15.414974  498968 logs.go:123] Gathering logs for etcd [5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197] ...
	I0815 17:58:15.415054  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:58:15.477762  498968 logs.go:123] Gathering logs for kubernetes-dashboard [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364] ...
	I0815 17:58:15.477836  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364"
	I0815 17:58:15.526187  498968 logs.go:123] Gathering logs for containerd ...
	I0815 17:58:15.526262  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 17:58:15.589379  498968 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:58:15.589454  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:58:15.758032  498968 logs.go:123] Gathering logs for kube-apiserver [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6] ...
	I0815 17:58:15.758105  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6"
	I0815 17:58:15.844080  498968 logs.go:123] Gathering logs for kube-scheduler [27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666] ...
	I0815 17:58:15.844117  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:58:15.918852  498968 logs.go:123] Gathering logs for kube-proxy [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72] ...
	I0815 17:58:15.918882  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72"
	I0815 17:58:15.973999  498968 logs.go:123] Gathering logs for storage-provisioner [03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47] ...
	I0815 17:58:15.974026  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47"
	I0815 17:58:16.033478  498968 logs.go:123] Gathering logs for kube-apiserver [66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438] ...
	I0815 17:58:16.033508  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:58:16.116954  498968 logs.go:123] Gathering logs for coredns [bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047] ...
	I0815 17:58:16.116987  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:58:16.164153  498968 logs.go:123] Gathering logs for kube-scheduler [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065] ...
	I0815 17:58:16.164180  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065"
	I0815 17:58:16.229230  498968 logs.go:123] Gathering logs for kindnet [3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6] ...
	I0815 17:58:16.229259  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:58:16.297535  498968 logs.go:123] Gathering logs for kindnet [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97] ...
	I0815 17:58:16.297567  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97"
	I0815 17:58:16.383976  498968 logs.go:123] Gathering logs for storage-provisioner [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95] ...
	I0815 17:58:16.384008  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95"
	I0815 17:58:16.438980  498968 logs.go:123] Gathering logs for container status ...
	I0815 17:58:16.439057  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0815 17:58:16.534317  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:16.534483  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0815 17:58:16.534601  498968 out.go:270] X Problems detected in kubelet:
	W0815 17:58:16.534641  498968 out.go:270]   Aug 15 17:57:41 old-k8s-version-460705 kubelet[665]: E0815 17:57:41.262122     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:16.534866  498968 out.go:270]   Aug 15 17:57:53 old-k8s-version-460705 kubelet[665]: E0815 17:57:53.262019     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:16.534903  498968 out.go:270]   Aug 15 17:57:56 old-k8s-version-460705 kubelet[665]: E0815 17:57:56.268331     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:16.534977  498968 out.go:270]   Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:16.535020  498968 out.go:270]   Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	I0815 17:58:16.535049  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:16.535079  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
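The kubelet problems collected above reduce to two distinct, repeating failures: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because fake.domain is deliberately unresolvable (ErrImagePull, then ImagePullBackOff), and dashboard-metrics-scraper is crash-looping with an escalating back-off (20s, 40s, 1m20s, 2m40s). A minimal diagnostic sketch, assuming kubectl is pointed at the old-k8s-version-460705 cluster; the pod names are taken from the log above, everything else is illustrative:

    # confirm the image-pull failure and its DNS cause
    kubectl -n kube-system describe pod metrics-server-9975d5f86-wd4q2
    nslookup fake.domain 192.168.76.1   # expected to fail, matching the kubelet error

    # check the restart count on the crash-looping scraper
    kubectl -n kubernetes-dashboard get pod dashboard-metrics-scraper-8d5bb5db8-bjqpx \
      -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'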
	I0815 17:58:19.351317  509696 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-918291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 -I lz4 -xf /preloaded.tar -C /extractDir: (5.085183635s)
	I0815 17:58:19.351350  509696 kic.go:203] duration metric: took 5.085353177s to extract preloaded images to volume ...
	W0815 17:58:19.351497  509696 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0815 17:58:19.351620  509696 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0815 17:58:19.404961  509696 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-918291 --name embed-certs-918291 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-918291 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-918291 --network embed-certs-918291 --ip 192.168.85.2 --volume embed-certs-918291:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002
	I0815 17:58:19.736524  509696 cli_runner.go:164] Run: docker container inspect embed-certs-918291 --format={{.State.Running}}
	I0815 17:58:19.753616  509696 cli_runner.go:164] Run: docker container inspect embed-certs-918291 --format={{.State.Status}}
	I0815 17:58:19.777741  509696 cli_runner.go:164] Run: docker exec embed-certs-918291 stat /var/lib/dpkg/alternatives/iptables
	I0815 17:58:19.848668  509696 oci.go:144] the created container "embed-certs-918291" has a running status.
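Each --publish=127.0.0.1:: flag in the docker run above asks Docker to bind an ephemeral host port, so the node's SSH port is not fixed; minikube recovers it afterwards from the container's port map. The same inspect template appears a few lines further down, and it is the reliable way to find the port by hand:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-918291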
	I0815 17:58:19.848694  509696 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19450-292730/.minikube/machines/embed-certs-918291/id_rsa...
	I0815 17:58:20.545218  509696 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19450-292730/.minikube/machines/embed-certs-918291/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0815 17:58:20.586236  509696 cli_runner.go:164] Run: docker container inspect embed-certs-918291 --format={{.State.Status}}
	I0815 17:58:20.610520  509696 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0815 17:58:20.610540  509696 kic_runner.go:114] Args: [docker exec --privileged embed-certs-918291 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0815 17:58:20.697802  509696 cli_runner.go:164] Run: docker container inspect embed-certs-918291 --format={{.State.Status}}
	I0815 17:58:20.720661  509696 machine.go:93] provisionDockerMachine start ...
	I0815 17:58:20.720746  509696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-918291
	I0815 17:58:20.743918  509696 main.go:141] libmachine: Using SSH client type: native
	I0815 17:58:20.744194  509696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0815 17:58:20.744203  509696 main.go:141] libmachine: About to run SSH command:
	hostname
	I0815 17:58:20.911314  509696 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-918291
	
	I0815 17:58:20.911405  509696 ubuntu.go:169] provisioning hostname "embed-certs-918291"
	I0815 17:58:20.911513  509696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-918291
	I0815 17:58:20.933880  509696 main.go:141] libmachine: Using SSH client type: native
	I0815 17:58:20.934141  509696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0815 17:58:20.934161  509696 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-918291 && echo "embed-certs-918291" | sudo tee /etc/hostname
	I0815 17:58:21.101708  509696 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-918291
	
	I0815 17:58:21.101795  509696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-918291
	I0815 17:58:21.120803  509696 main.go:141] libmachine: Using SSH client type: native
	I0815 17:58:21.121090  509696 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0815 17:58:21.121114  509696 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-918291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-918291/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-918291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0815 17:58:21.263059  509696 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0815 17:58:21.263101  509696 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19450-292730/.minikube CaCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19450-292730/.minikube}
	I0815 17:58:21.263125  509696 ubuntu.go:177] setting up certificates
	I0815 17:58:21.263138  509696 provision.go:84] configureAuth start
	I0815 17:58:21.263208  509696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-918291
	I0815 17:58:21.286150  509696 provision.go:143] copyHostCerts
	I0815 17:58:21.286217  509696 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-292730/.minikube/key.pem, removing ...
	I0815 17:58:21.286231  509696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-292730/.minikube/key.pem
	I0815 17:58:21.286306  509696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/key.pem (1675 bytes)
	I0815 17:58:21.286588  509696 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-292730/.minikube/ca.pem, removing ...
	I0815 17:58:21.286606  509696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-292730/.minikube/ca.pem
	I0815 17:58:21.286644  509696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/ca.pem (1082 bytes)
	I0815 17:58:21.286735  509696 exec_runner.go:144] found /home/jenkins/minikube-integration/19450-292730/.minikube/cert.pem, removing ...
	I0815 17:58:21.286741  509696 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19450-292730/.minikube/cert.pem
	I0815 17:58:21.286766  509696 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19450-292730/.minikube/cert.pem (1123 bytes)
	I0815 17:58:21.286831  509696 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem org=jenkins.embed-certs-918291 san=[127.0.0.1 192.168.85.2 embed-certs-918291 localhost minikube]
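The server certificate is generated with SANs covering every address the API server will be reached on: the host-side port-forward (127.0.0.1), the container IP (192.168.85.2), the hostname, localhost, and minikube. A sketch for double-checking the SANs on the emitted cert, assuming OpenSSL 1.1.1+ on the test host (the -ext option is not available in older releases):

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem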
	I0815 17:58:21.984541  509696 provision.go:177] copyRemoteCerts
	I0815 17:58:21.984608  509696 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0815 17:58:21.984653  509696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-918291
	I0815 17:58:22.003554  509696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/embed-certs-918291/id_rsa Username:docker}
	I0815 17:58:22.106685  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0815 17:58:22.131885  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0815 17:58:22.156885  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0815 17:58:22.182293  509696 provision.go:87] duration metric: took 919.132568ms to configureAuth
	I0815 17:58:22.182319  509696 ubuntu.go:193] setting minikube options for container-runtime
	I0815 17:58:22.182513  509696 config.go:182] Loaded profile config "embed-certs-918291": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:58:22.182521  509696 machine.go:96] duration metric: took 1.46184259s to provisionDockerMachine
	I0815 17:58:22.182528  509696 client.go:171] duration metric: took 8.722334444s to LocalClient.Create
	I0815 17:58:22.182548  509696 start.go:167] duration metric: took 8.7223997s to libmachine.API.Create "embed-certs-918291"
	I0815 17:58:22.182555  509696 start.go:293] postStartSetup for "embed-certs-918291" (driver="docker")
	I0815 17:58:22.182564  509696 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0815 17:58:22.182625  509696 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0815 17:58:22.182667  509696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-918291
	I0815 17:58:22.199743  509696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/embed-certs-918291/id_rsa Username:docker}
	I0815 17:58:22.302695  509696 ssh_runner.go:195] Run: cat /etc/os-release
	I0815 17:58:22.306193  509696 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0815 17:58:22.306227  509696 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0815 17:58:22.306238  509696 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0815 17:58:22.306257  509696 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0815 17:58:22.306267  509696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-292730/.minikube/addons for local assets ...
	I0815 17:58:22.306324  509696 filesync.go:126] Scanning /home/jenkins/minikube-integration/19450-292730/.minikube/files for local assets ...
	I0815 17:58:22.306412  509696 filesync.go:149] local asset: /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/ssl/certs/2981302.pem -> 2981302.pem in /etc/ssl/certs
	I0815 17:58:22.306516  509696 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0815 17:58:22.315166  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/ssl/certs/2981302.pem --> /etc/ssl/certs/2981302.pem (1708 bytes)
	I0815 17:58:22.341756  509696 start.go:296] duration metric: took 159.184083ms for postStartSetup
	I0815 17:58:22.342136  509696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-918291
	I0815 17:58:22.359462  509696 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/config.json ...
	I0815 17:58:22.359762  509696 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:58:22.359815  509696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-918291
	I0815 17:58:22.377013  509696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/embed-certs-918291/id_rsa Username:docker}
	I0815 17:58:22.470397  509696 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0815 17:58:22.475457  509696 start.go:128] duration metric: took 9.017805843s to createHost
	I0815 17:58:22.475481  509696 start.go:83] releasing machines lock for "embed-certs-918291", held for 9.01795159s
	I0815 17:58:22.475555  509696 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-918291
	I0815 17:58:22.494927  509696 ssh_runner.go:195] Run: cat /version.json
	I0815 17:58:22.494988  509696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-918291
	I0815 17:58:22.495268  509696 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0815 17:58:22.495320  509696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-918291
	I0815 17:58:22.513976  509696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/embed-certs-918291/id_rsa Username:docker}
	I0815 17:58:22.514471  509696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/embed-certs-918291/id_rsa Username:docker}
	I0815 17:58:22.612745  509696 ssh_runner.go:195] Run: systemctl --version
	I0815 17:58:22.748663  509696 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0815 17:58:22.753295  509696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0815 17:58:22.779352  509696 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0815 17:58:22.779430  509696 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0815 17:58:22.814669  509696 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
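The two find invocations above do the CNI housekeeping in place: the first injects a "name" field into any loopback config that lacks one and pins its cniVersion to 1.0.0; the second renames every bridge/podman config with a .mk_disabled suffix so it no longer loads, leaving the CNI minikube selects later (kindnet, per the lines below) as the only active network config. Unrolled for a single file, the patch amounts to the following; the loopback file name is illustrative, the sed expressions and the disabled file are the ones from the log:

    sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' /etc/cni/net.d/200-loopback.conf
    sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' /etc/cni/net.d/200-loopback.conf
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled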
	I0815 17:58:22.814703  509696 start.go:495] detecting cgroup driver to use...
	I0815 17:58:22.814736  509696 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0815 17:58:22.814788  509696 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0815 17:58:22.827619  509696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0815 17:58:22.839574  509696 docker.go:217] disabling cri-docker service (if available) ...
	I0815 17:58:22.839642  509696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0815 17:58:22.854145  509696 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0815 17:58:22.868850  509696 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0815 17:58:22.973683  509696 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0815 17:58:23.102683  509696 docker.go:233] disabling docker service ...
	I0815 17:58:23.102764  509696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0815 17:58:23.130074  509696 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0815 17:58:23.142644  509696 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0815 17:58:23.227343  509696 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0815 17:58:23.324147  509696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0815 17:58:23.336930  509696 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0815 17:58:23.356679  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0815 17:58:23.369350  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0815 17:58:23.379977  509696 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0815 17:58:23.380092  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0815 17:58:23.391327  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 17:58:23.402544  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0815 17:58:23.412769  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0815 17:58:23.423939  509696 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0815 17:58:23.433947  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0815 17:58:23.445156  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0815 17:58:23.456943  509696 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0815 17:58:23.468288  509696 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0815 17:58:23.477747  509696 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0815 17:58:23.486926  509696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:58:23.586784  509696 ssh_runner.go:195] Run: sudo systemctl restart containerd
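The sed sequence above converges /etc/containerd/config.toml on the settings this profile needs before the restart: cgroupfs rather than systemd cgroups, the runc v2 shim, pause:3.10 as the sandbox image, /etc/cni/net.d as the CNI conf dir, and unprivileged ports enabled for the CRI plugin. A spot-check sketch against the rewritten file (expected values, taken from the sed expressions above, shown as comments):

    sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|runc\.v2|enable_unprivileged_ports' /etc/containerd/config.toml
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.10"
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true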
	I0815 17:58:23.725519  509696 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0815 17:58:23.725634  509696 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0815 17:58:23.729630  509696 start.go:563] Will wait 60s for crictl version
	I0815 17:58:23.729722  509696 ssh_runner.go:195] Run: which crictl
	I0815 17:58:23.733196  509696 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0815 17:58:23.772810  509696 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0815 17:58:23.772910  509696 ssh_runner.go:195] Run: containerd --version
	I0815 17:58:23.798630  509696 ssh_runner.go:195] Run: containerd --version
	I0815 17:58:23.822802  509696 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0815 17:58:23.824818  509696 cli_runner.go:164] Run: docker network inspect embed-certs-918291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0815 17:58:23.840436  509696 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0815 17:58:23.844236  509696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
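The grep/echo/cp idiom above is how minikube keeps /etc/hosts updates idempotent: drop any existing line ending in the tab-separated hostname, append the fresh mapping, and install the temp file with sudo cp (a plain > redirect to /etc/hosts would run under the unprivileged shell and fail). Expanded for readability, with the temp file name illustrative:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts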
	I0815 17:58:23.855458  509696 kubeadm.go:883] updating cluster {Name:embed-certs-918291 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0815 17:58:23.855581  509696 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 17:58:23.855646  509696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:58:23.891581  509696 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 17:58:23.891607  509696 containerd.go:534] Images already preloaded, skipping extraction
	I0815 17:58:23.891665  509696 ssh_runner.go:195] Run: sudo crictl images --output json
	I0815 17:58:23.928656  509696 containerd.go:627] all images are preloaded for containerd runtime.
	I0815 17:58:23.928683  509696 cache_images.go:84] Images are preloaded, skipping loading
	I0815 17:58:23.928692  509696 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.0 containerd true true} ...
	I0815 17:58:23.928814  509696 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-918291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
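One detail in the kubelet drop-in above is the empty ExecStart= line: for systemd services, an empty assignment in a drop-in clears the command list inherited from the base kubelet.service, so the ExecStart= that follows fully replaces the packaged command instead of adding a second one. The effective merged unit can be checked on the node with a sketch like:

    sudo systemctl cat kubelet | grep -B1 -A1 '^ExecStart'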
	I0815 17:58:23.928890  509696 ssh_runner.go:195] Run: sudo crictl info
	I0815 17:58:23.969735  509696 cni.go:84] Creating CNI manager for ""
	I0815 17:58:23.969759  509696 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:58:23.969771  509696 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0815 17:58:23.969795  509696 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-918291 NodeName:embed-certs-918291 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0815 17:58:23.969934  509696 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-918291"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0815 17:58:23.970009  509696 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0815 17:58:23.978913  509696 binaries.go:44] Found k8s binaries, skipping transfer
	I0815 17:58:23.978986  509696 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0815 17:58:23.987813  509696 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0815 17:58:24.014419  509696 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0815 17:58:24.035984  509696 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
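The multi-document config shown above is staged as kubeadm.yaml.new here and only promoted to kubeadm.yaml later in the run (see the cp further down). As a hedged aside: if your kubeadm build ships the `config validate` subcommand (present in recent releases), the staged file can be checked by hand with the same versioned binary the log uses:

    # hypothetical manual check, not part of the test run
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new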
	I0815 17:58:24.057019  509696 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0815 17:58:24.061001  509696 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
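That one-liner is an idempotent /etc/hosts rewrite: strip any stale control-plane entry, append the current mapping, and put the rebuilt file in place with a single privileged copy. The same idiom, unpacked with comments (names taken from the log):

    # keep every line except an old control-plane.minikube.internal mapping,
    # then append the fresh one (tab-separated, as /etc/hosts expects)
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.85.2\tcontrol-plane.minikube.internal\n'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts   # one sudo copy replaces the file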
	I0815 17:58:24.073053  509696 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0815 17:58:24.170527  509696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0815 17:58:24.187162  509696 certs.go:68] Setting up /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291 for IP: 192.168.85.2
	I0815 17:58:24.187184  509696 certs.go:194] generating shared ca certs ...
	I0815 17:58:24.187201  509696 certs.go:226] acquiring lock for ca certs: {Name:mkb4a15757b6ba038567496d15807eaae760a8a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:58:24.187333  509696 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19450-292730/.minikube/ca.key
	I0815 17:58:24.187381  509696 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.key
	I0815 17:58:24.187394  509696 certs.go:256] generating profile certs ...
	I0815 17:58:24.187449  509696 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/client.key
	I0815 17:58:24.187465  509696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/client.crt with IP's: []
	I0815 17:58:25.180355  509696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/client.crt ...
	I0815 17:58:25.180389  509696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/client.crt: {Name:mk984eb07a5532ae40595742583adbed3ceac4e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:58:25.180989  509696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/client.key ...
	I0815 17:58:25.181007  509696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/client.key: {Name:mk8d377c6e5cdce4dcc3763aa1b3a200cc26f0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:58:25.181450  509696 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.key.eaf9c3e6
	I0815 17:58:25.181483  509696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.crt.eaf9c3e6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0815 17:58:26.462835  509696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.crt.eaf9c3e6 ...
	I0815 17:58:26.462875  509696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.crt.eaf9c3e6: {Name:mk101da3e7e2e06ca6abc30fe390ac402a42da35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:58:26.463669  509696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.key.eaf9c3e6 ...
	I0815 17:58:26.463697  509696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.key.eaf9c3e6: {Name:mk03c78e19897ec4d702c1e5bece03dcfff6fc23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:58:26.463811  509696 certs.go:381] copying /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.crt.eaf9c3e6 -> /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.crt
	I0815 17:58:26.463895  509696 certs.go:385] copying /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.key.eaf9c3e6 -> /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.key
	I0815 17:58:26.463970  509696 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/proxy-client.key
	I0815 17:58:26.463990  509696 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/proxy-client.crt with IP's: []
	I0815 17:58:26.858397  509696 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/proxy-client.crt ...
	I0815 17:58:26.858559  509696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/proxy-client.crt: {Name:mk2af9a4bae7fb7defacb75bd2d95526a402f6fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:58:26.858840  509696 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/proxy-client.key ...
	I0815 17:58:26.858885  509696 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/proxy-client.key: {Name:mkd41cbca89ad5517ffc57ee9502cfcc82d3c938 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
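All of the profile certs above are produced by minikube's own Go crypto helpers (crypto.go), not by shelling out. Purely as an illustration of the shape of the apiserver cert, here is a self-signed stand-in carrying the same IP SANs; minikube actually signs with minikubeCA, and the file names below are hypothetical:

    # requires OpenSSL 1.1.1+ for -addext
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -subj '/CN=minikube' \
      -addext 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2' \
      -keyout apiserver-demo.key -out apiserver-demo.crt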
	I0815 17:58:26.859668  509696 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/298130.pem (1338 bytes)
	W0815 17:58:26.859754  509696 certs.go:480] ignoring /home/jenkins/minikube-integration/19450-292730/.minikube/certs/298130_empty.pem, impossibly tiny 0 bytes
	I0815 17:58:26.859778  509696 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca-key.pem (1675 bytes)
	I0815 17:58:26.859840  509696 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/ca.pem (1082 bytes)
	I0815 17:58:26.859892  509696 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/cert.pem (1123 bytes)
	I0815 17:58:26.860001  509696 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/certs/key.pem (1675 bytes)
	I0815 17:58:26.860076  509696 certs.go:484] found cert: /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/ssl/certs/2981302.pem (1708 bytes)
	I0815 17:58:26.860705  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0815 17:58:26.889234  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0815 17:58:26.927353  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0815 17:58:26.956427  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0815 17:58:26.986975  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0815 17:58:27.019614  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0815 17:58:27.052963  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0815 17:58:27.083739  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/embed-certs-918291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0815 17:58:27.115751  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0815 17:58:27.150025  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/certs/298130.pem --> /usr/share/ca-certificates/298130.pem (1338 bytes)
	I0815 17:58:27.181561  509696 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/ssl/certs/2981302.pem --> /usr/share/ca-certificates/2981302.pem (1708 bytes)
	I0815 17:58:27.213210  509696 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0815 17:58:27.234491  509696 ssh_runner.go:195] Run: openssl version
	I0815 17:58:27.240402  509696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2981302.pem && ln -fs /usr/share/ca-certificates/2981302.pem /etc/ssl/certs/2981302.pem"
	I0815 17:58:27.251030  509696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2981302.pem
	I0815 17:58:27.255113  509696 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 15 17:15 /usr/share/ca-certificates/2981302.pem
	I0815 17:58:27.255228  509696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2981302.pem
	I0815 17:58:27.268063  509696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2981302.pem /etc/ssl/certs/3ec20f2e.0"
	I0815 17:58:27.279349  509696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0815 17:58:27.305945  509696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:58:27.310720  509696 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:05 /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:58:27.310808  509696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0815 17:58:27.319618  509696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0815 17:58:27.334827  509696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298130.pem && ln -fs /usr/share/ca-certificates/298130.pem /etc/ssl/certs/298130.pem"
	I0815 17:58:27.348081  509696 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298130.pem
	I0815 17:58:27.360109  509696 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 15 17:15 /usr/share/ca-certificates/298130.pem
	I0815 17:58:27.360187  509696 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298130.pem
	I0815 17:58:27.385386  509696 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298130.pem /etc/ssl/certs/51391683.0"
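The hash/symlink pairs above follow OpenSSL's subject-hash layout: `openssl x509 -hash -noout` prints an 8-hex-digit hash of the certificate subject, and a `<hash>.0` symlink under /etc/ssl/certs is what lets verification find the CA by hash (the log shows b5213941 for minikubeCA, 3ec20f2e and 51391683 for the two user certs). The same steps by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                   # b5213941 in this run
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"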
	I0815 17:58:27.404422  509696 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0815 17:58:27.411653  509696 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
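Exit status 1 from stat is interpreted rather than treated as a failure: a missing apiserver-kubelet-client.crt means kubeadm has never initialized this node. The same probe as a standalone sketch:

    if sudo stat /var/lib/minikube/certs/apiserver-kubelet-client.crt >/dev/null 2>&1; then
      echo 'cert present: reusing existing cluster state'
    else
      echo 'cert missing: likely first start'   # the interpretation the log records
    fi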
	I0815 17:58:27.411742  509696 kubeadm.go:392] StartCluster: {Name:embed-certs-918291 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-918291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:58:27.411857  509696 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0815 17:58:27.411935  509696 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0815 17:58:27.480321  509696 cri.go:89] found id: ""
	I0815 17:58:27.480421  509696 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0815 17:58:27.492122  509696 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0815 17:58:27.502085  509696 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0815 17:58:27.502161  509696 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0815 17:58:27.513694  509696 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0815 17:58:27.513715  509696 kubeadm.go:157] found existing configuration files:
	
	I0815 17:58:27.513784  509696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0815 17:58:27.525508  509696 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0815 17:58:27.525590  509696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0815 17:58:27.534885  509696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0815 17:58:27.545322  509696 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0815 17:58:27.545386  509696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0815 17:58:27.556215  509696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0815 17:58:27.567984  509696 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0815 17:58:27.568116  509696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0815 17:58:27.581667  509696 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0815 17:58:27.593005  509696 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0815 17:58:27.593068  509696 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0815 17:58:27.603099  509696 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0815 17:58:27.663938  509696 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0815 17:58:27.665422  509696 kubeadm.go:310] [preflight] Running pre-flight checks
	I0815 17:58:27.698693  509696 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0815 17:58:27.698762  509696 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0815 17:58:27.698796  509696 kubeadm.go:310] OS: Linux
	I0815 17:58:27.698843  509696 kubeadm.go:310] CGROUPS_CPU: enabled
	I0815 17:58:27.698896  509696 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0815 17:58:27.698943  509696 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0815 17:58:27.698989  509696 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0815 17:58:27.699037  509696 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0815 17:58:27.699085  509696 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0815 17:58:27.699136  509696 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0815 17:58:27.699183  509696 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0815 17:58:27.699227  509696 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0815 17:58:27.793244  509696 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0815 17:58:27.793350  509696 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0815 17:58:27.793439  509696 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0815 17:58:27.812510  509696 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0815 17:58:27.815938  509696 out.go:235]   - Generating certificates and keys ...
	I0815 17:58:27.816629  509696 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0815 17:58:27.817533  509696 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0815 17:58:26.536094  498968 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:58:26.563627  498968 api_server.go:72] duration metric: took 5m55.298829172s to wait for apiserver process to appear ...
	I0815 17:58:26.563651  498968 api_server.go:88] waiting for apiserver healthz status ...
	I0815 17:58:26.563686  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0815 17:58:26.563751  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0815 17:58:26.696998  498968 cri.go:89] found id: "898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6"
	I0815 17:58:26.697018  498968 cri.go:89] found id: "66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:58:26.697023  498968 cri.go:89] found id: ""
	I0815 17:58:26.697031  498968 logs.go:276] 2 containers: [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438]
	I0815 17:58:26.697088  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.702831  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.707420  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0815 17:58:26.707483  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0815 17:58:26.781317  498968 cri.go:89] found id: "1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6"
	I0815 17:58:26.781340  498968 cri.go:89] found id: "5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:58:26.781345  498968 cri.go:89] found id: ""
	I0815 17:58:26.781352  498968 logs.go:276] 2 containers: [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6 5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197]
	I0815 17:58:26.781411  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.785343  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.789466  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0815 17:58:26.789529  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0815 17:58:26.847996  498968 cri.go:89] found id: "ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568"
	I0815 17:58:26.848015  498968 cri.go:89] found id: "bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:58:26.848021  498968 cri.go:89] found id: ""
	I0815 17:58:26.848028  498968 logs.go:276] 2 containers: [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568 bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047]
	I0815 17:58:26.848085  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.851974  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.857321  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0815 17:58:26.857386  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0815 17:58:26.911952  498968 cri.go:89] found id: "3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065"
	I0815 17:58:26.911972  498968 cri.go:89] found id: "27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:58:26.911977  498968 cri.go:89] found id: ""
	I0815 17:58:26.911985  498968 logs.go:276] 2 containers: [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065 27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666]
	I0815 17:58:26.912042  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.919333  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.923192  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0815 17:58:26.923260  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0815 17:58:26.976029  498968 cri.go:89] found id: "ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72"
	I0815 17:58:26.976050  498968 cri.go:89] found id: "755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:58:26.976054  498968 cri.go:89] found id: ""
	I0815 17:58:26.976062  498968 logs.go:276] 2 containers: [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72 755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef]
	I0815 17:58:26.976116  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.979965  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:26.983541  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0815 17:58:26.983605  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0815 17:58:27.038212  498968 cri.go:89] found id: "c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791"
	I0815 17:58:27.038232  498968 cri.go:89] found id: "cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:58:27.038237  498968 cri.go:89] found id: ""
	I0815 17:58:27.038245  498968 logs.go:276] 2 containers: [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791 cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f]
	I0815 17:58:27.038302  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.043119  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.047320  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0815 17:58:27.047450  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0815 17:58:27.103276  498968 cri.go:89] found id: "7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97"
	I0815 17:58:27.103354  498968 cri.go:89] found id: "3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:58:27.103376  498968 cri.go:89] found id: ""
	I0815 17:58:27.103396  498968 logs.go:276] 2 containers: [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97 3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6]
	I0815 17:58:27.103477  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.107949  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.112301  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0815 17:58:27.112424  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0815 17:58:27.166321  498968 cri.go:89] found id: "ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95"
	I0815 17:58:27.166400  498968 cri.go:89] found id: "03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47"
	I0815 17:58:27.166421  498968 cri.go:89] found id: ""
	I0815 17:58:27.166441  498968 logs.go:276] 2 containers: [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95 03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47]
	I0815 17:58:27.166531  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.170741  498968 ssh_runner.go:195] Run: which crictl
	I0815 17:58:27.174840  498968 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0815 17:58:27.174961  498968 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0815 17:58:27.260438  498968 cri.go:89] found id: "e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364"
	I0815 17:58:27.260516  498968 cri.go:89] found id: ""
	I0815 17:58:27.260539  498968 logs.go:276] 1 containers: [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364]
	I0815 17:58:27.260620  498968 ssh_runner.go:195] Run: which crictl
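Each block above is one iteration of the same discovery pattern: list candidate container IDs for a component, remember them, and then (below) tail each one's logs. Condensed into a loop, assuming sudo and crictl as in the log:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
      for id in $(sudo crictl ps -a --quiet --name="$name"); do
        sudo crictl logs --tail 400 "$id"
      done
    done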
	I0815 17:58:27.281246  498968 logs.go:123] Gathering logs for kube-apiserver [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6] ...
	I0815 17:58:27.281314  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6"
	I0815 17:58:27.382222  498968 logs.go:123] Gathering logs for coredns [bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047] ...
	I0815 17:58:27.382334  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047"
	I0815 17:58:27.443852  498968 logs.go:123] Gathering logs for kubelet ...
	I0815 17:58:27.443880  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0815 17:58:27.508864  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.368840     665 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.509913  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.368954     665 reflector.go:138] object-"kube-system"/"kube-proxy-token-gftlr": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gftlr" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.510224  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.369034     665 reflector.go:138] object-"kube-system"/"kindnet-token-mbwt5": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-mbwt5" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.510502  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.369107     665 reflector.go:138] object-"kube-system"/"coredns-token-2p8pb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-2p8pb" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.510775  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.373149     665 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.511034  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.379215     665 reflector.go:138] object-"default"/"default-token-wlhtd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-wlhtd" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.511347  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.379383     665 reflector.go:138] object-"kube-system"/"storage-provisioner-token-2zctk": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-2zctk" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.515568  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:50 old-k8s-version-460705 kubelet[665]: E0815 17:52:50.450844     665 reflector.go:138] object-"kube-system"/"metrics-server-token-fcq8q": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-fcq8q" is forbidden: User "system:node:old-k8s-version-460705" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-460705' and this object
	W0815 17:58:27.524365  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:54 old-k8s-version-460705 kubelet[665]: E0815 17:52:54.440249     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.525040  498968 logs.go:138] Found kubelet problem: Aug 15 17:52:54 old-k8s-version-460705 kubelet[665]: E0815 17:52:54.592846     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.527983  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:06 old-k8s-version-460705 kubelet[665]: E0815 17:53:06.270605     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.530125  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:16 old-k8s-version-460705 kubelet[665]: E0815 17:53:16.691327     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.530498  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:17 old-k8s-version-460705 kubelet[665]: E0815 17:53:17.695728     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.531051  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:20 old-k8s-version-460705 kubelet[665]: E0815 17:53:20.261914     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.531531  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:22 old-k8s-version-460705 kubelet[665]: E0815 17:53:22.712000     665 pod_workers.go:191] Error syncing pod 821fca20-3432-4c38-b3e8-fdeef57602be ("storage-provisioner_kube-system(821fca20-3432-4c38-b3e8-fdeef57602be)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(821fca20-3432-4c38-b3e8-fdeef57602be)"
	W0815 17:58:27.531897  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:24 old-k8s-version-460705 kubelet[665]: E0815 17:53:24.328211     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.536011  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:34 old-k8s-version-460705 kubelet[665]: E0815 17:53:34.270080     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.536704  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:38 old-k8s-version-460705 kubelet[665]: E0815 17:53:38.762716     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.537093  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:44 old-k8s-version-460705 kubelet[665]: E0815 17:53:44.328261     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.537369  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:49 old-k8s-version-460705 kubelet[665]: E0815 17:53:49.269748     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.537779  498968 logs.go:138] Found kubelet problem: Aug 15 17:53:58 old-k8s-version-460705 kubelet[665]: E0815 17:53:58.261622     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.538015  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:03 old-k8s-version-460705 kubelet[665]: E0815 17:54:03.262372     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.538689  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:10 old-k8s-version-460705 kubelet[665]: E0815 17:54:10.861689     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.539111  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:14 old-k8s-version-460705 kubelet[665]: E0815 17:54:14.329009     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.541883  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:15 old-k8s-version-460705 kubelet[665]: E0815 17:54:15.288209     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.542249  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:25 old-k8s-version-460705 kubelet[665]: E0815 17:54:25.261679     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.542452  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:28 old-k8s-version-460705 kubelet[665]: E0815 17:54:28.261869     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.542855  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:38 old-k8s-version-460705 kubelet[665]: E0815 17:54:38.261576     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.543057  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:43 old-k8s-version-460705 kubelet[665]: E0815 17:54:43.262643     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.543413  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:50 old-k8s-version-460705 kubelet[665]: E0815 17:54:50.261885     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.543608  498968 logs.go:138] Found kubelet problem: Aug 15 17:54:56 old-k8s-version-460705 kubelet[665]: E0815 17:54:56.261870     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.544268  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:01 old-k8s-version-460705 kubelet[665]: E0815 17:55:01.990218     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.544656  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:04 old-k8s-version-460705 kubelet[665]: E0815 17:55:04.328215     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.544855  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:08 old-k8s-version-460705 kubelet[665]: E0815 17:55:08.262273     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.545645  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:18 old-k8s-version-460705 kubelet[665]: E0815 17:55:18.261614     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.545913  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:19 old-k8s-version-460705 kubelet[665]: E0815 17:55:19.265350     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.546294  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:31 old-k8s-version-460705 kubelet[665]: E0815 17:55:31.265265     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.546521  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:34 old-k8s-version-460705 kubelet[665]: E0815 17:55:34.261857     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.546904  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:44 old-k8s-version-460705 kubelet[665]: E0815 17:55:44.261502     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.549708  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:47 old-k8s-version-460705 kubelet[665]: E0815 17:55:47.276854     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0815 17:58:27.550131  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:58 old-k8s-version-460705 kubelet[665]: E0815 17:55:58.261567     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.550378  498968 logs.go:138] Found kubelet problem: Aug 15 17:55:58 old-k8s-version-460705 kubelet[665]: E0815 17:55:58.262407     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.550761  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:10 old-k8s-version-460705 kubelet[665]: E0815 17:56:10.262030     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.550999  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:13 old-k8s-version-460705 kubelet[665]: E0815 17:56:13.264357     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.551380  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:21 old-k8s-version-460705 kubelet[665]: E0815 17:56:21.261686     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.551614  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:26 old-k8s-version-460705 kubelet[665]: E0815 17:56:26.261880     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.552280  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:33 old-k8s-version-460705 kubelet[665]: E0815 17:56:33.252733     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.552652  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:34 old-k8s-version-460705 kubelet[665]: E0815 17:56:34.328819     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.552875  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:37 old-k8s-version-460705 kubelet[665]: E0815 17:56:37.262397     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.553282  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:45 old-k8s-version-460705 kubelet[665]: E0815 17:56:45.262262     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.553505  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:48 old-k8s-version-460705 kubelet[665]: E0815 17:56:48.262102     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.553895  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:59 old-k8s-version-460705 kubelet[665]: E0815 17:56:59.265417     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.554117  498968 logs.go:138] Found kubelet problem: Aug 15 17:56:59 old-k8s-version-460705 kubelet[665]: E0815 17:56:59.266290     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.555099  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:11 old-k8s-version-460705 kubelet[665]: E0815 17:57:11.264783     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.555524  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:14 old-k8s-version-460705 kubelet[665]: E0815 17:57:14.262035     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.555737  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:25 old-k8s-version-460705 kubelet[665]: E0815 17:57:25.262627     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.556094  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:28 old-k8s-version-460705 kubelet[665]: E0815 17:57:28.261540     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.556308  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:38 old-k8s-version-460705 kubelet[665]: E0815 17:57:38.261980     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.556739  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:41 old-k8s-version-460705 kubelet[665]: E0815 17:57:41.262122     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.556976  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:53 old-k8s-version-460705 kubelet[665]: E0815 17:57:53.262019     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.557358  498968 logs.go:138] Found kubelet problem: Aug 15 17:57:56 old-k8s-version-460705 kubelet[665]: E0815 17:57:56.268331     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.557579  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.557962  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.558215  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:15 old-k8s-version-460705 kubelet[665]: E0815 17:58:15.265484     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:27.558592  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:24 old-k8s-version-460705 kubelet[665]: E0815 17:58:24.261897     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:27.558906  498968 logs.go:138] Found kubelet problem: Aug 15 17:58:26 old-k8s-version-460705 kubelet[665]: E0815 17:58:26.262340     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
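	Every kubelet warning above reduces to two root causes: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 because fake.domain never resolves, and dashboard-metrics-scraper is stuck in CrashLoopBackOff. A minimal sketch to reproduce the pull failure by hand, assuming the old-k8s-version-460705 profile is still running (crictl is on the node, per the commands below):
	
	# expected to fail with the same "no such host" resolve error seen in the log
	minikube ssh -p old-k8s-version-460705 -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4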
	I0815 17:58:27.558923  498968 logs.go:123] Gathering logs for kube-scheduler [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065] ...
	I0815 17:58:27.558948  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065"
	I0815 17:58:27.612976  498968 logs.go:123] Gathering logs for kube-proxy [755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef] ...
	I0815 17:58:27.613004  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef"
	I0815 17:58:27.670623  498968 logs.go:123] Gathering logs for storage-provisioner [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95] ...
	I0815 17:58:27.670648  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95"
	I0815 17:58:27.737187  498968 logs.go:123] Gathering logs for kubernetes-dashboard [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364] ...
	I0815 17:58:27.737215  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364"
	I0815 17:58:27.795544  498968 logs.go:123] Gathering logs for dmesg ...
	I0815 17:58:27.795570  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0815 17:58:27.814027  498968 logs.go:123] Gathering logs for describe nodes ...
	I0815 17:58:27.814054  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0815 17:58:27.991678  498968 logs.go:123] Gathering logs for kube-apiserver [66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438] ...
	I0815 17:58:27.991756  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438"
	I0815 17:58:28.062006  498968 logs.go:123] Gathering logs for etcd [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6] ...
	I0815 17:58:28.062049  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6"
	I0815 17:58:28.126014  498968 logs.go:123] Gathering logs for kube-scheduler [27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666] ...
	I0815 17:58:28.126047  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666"
	I0815 17:58:28.183679  498968 logs.go:123] Gathering logs for kube-controller-manager [cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f] ...
	I0815 17:58:28.183712  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f"
	I0815 17:58:28.277954  498968 logs.go:123] Gathering logs for kindnet [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97] ...
	I0815 17:58:28.277989  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97"
	I0815 17:58:28.370222  498968 logs.go:123] Gathering logs for storage-provisioner [03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47] ...
	I0815 17:58:28.370260  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47"
	I0815 17:58:28.141610  509696 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0815 17:58:28.820835  509696 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0815 17:58:29.063929  509696 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0815 17:58:29.367311  509696 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0815 17:58:29.863825  509696 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0815 17:58:29.863985  509696 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-918291 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0815 17:58:30.386598  509696 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0815 17:58:30.386747  509696 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-918291 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0815 17:58:31.203491  509696 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0815 17:58:31.495657  509696 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0815 17:58:31.775882  509696 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0815 17:58:31.776201  509696 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0815 17:58:31.932560  509696 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0815 17:58:33.014486  509696 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0815 17:58:28.438754  498968 logs.go:123] Gathering logs for etcd [5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197] ...
	I0815 17:58:28.438786  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197"
	I0815 17:58:28.489041  498968 logs.go:123] Gathering logs for coredns [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568] ...
	I0815 17:58:28.489071  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568"
	I0815 17:58:28.561899  498968 logs.go:123] Gathering logs for kube-proxy [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72] ...
	I0815 17:58:28.561931  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72"
	I0815 17:58:28.610927  498968 logs.go:123] Gathering logs for kube-controller-manager [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791] ...
	I0815 17:58:28.610963  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791"
	I0815 17:58:28.715451  498968 logs.go:123] Gathering logs for kindnet [3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6] ...
	I0815 17:58:28.715487  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6"
	I0815 17:58:28.790001  498968 logs.go:123] Gathering logs for containerd ...
	I0815 17:58:28.790036  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0815 17:58:28.859683  498968 logs.go:123] Gathering logs for container status ...
	I0815 17:58:28.859723  498968 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
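	The gathering steps above all follow one pattern: per-container logs via crictl, daemon logs via journalctl, and an overall listing for "container status". A manual equivalent, assuming shell access to the node (e.g. via minikube ssh); <CONTAINER_ID> is a placeholder for any ID from the listing:
	
	sudo crictl ps -a                           # container listing, as in "container status" below
	sudo crictl logs --tail 400 <CONTAINER_ID>  # per-container logs, as logs.go runs for each ID
	sudo journalctl -u containerd -n 400        # containerd daemon logs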
	I0815 17:58:28.935572  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:28.935598  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0815 17:58:28.935690  498968 out.go:270] X Problems detected in kubelet:
	W0815 17:58:28.935709  498968 out.go:270]   Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:28.935715  498968 out.go:270]   Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:28.935903  498968 out.go:270]   Aug 15 17:58:15 old-k8s-version-460705 kubelet[665]: E0815 17:58:15.265484     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0815 17:58:28.935919  498968 out.go:270]   Aug 15 17:58:24 old-k8s-version-460705 kubelet[665]: E0815 17:58:24.261897     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	W0815 17:58:28.935935  498968 out.go:270]   Aug 15 17:58:26 old-k8s-version-460705 kubelet[665]: E0815 17:58:26.262340     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0815 17:58:28.935941  498968 out.go:358] Setting ErrFile to fd 2...
	I0815 17:58:28.935949  498968 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:58:33.431826  509696 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0815 17:58:33.995907  509696 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0815 17:58:34.317346  509696 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0815 17:58:34.317457  509696 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0815 17:58:34.321184  509696 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0815 17:58:34.323848  509696 out.go:235]   - Booting up control plane ...
	I0815 17:58:34.323977  509696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0815 17:58:34.324087  509696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0815 17:58:34.325658  509696 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0815 17:58:34.342440  509696 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0815 17:58:34.349746  509696 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0815 17:58:34.349958  509696 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0815 17:58:34.464884  509696 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0815 17:58:34.465012  509696 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0815 17:58:35.965360  509696 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501466167s
	I0815 17:58:35.965451  509696 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
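	The 509696 process interleaved here is a parallel kubeadm bootstrap for the embed-certs-918291 profile, walking the standard phases: certs, kubeconfig files, etcd and control-plane static-Pod manifests, kubelet-start, then the kubelet and API health checks. The same phases can be invoked individually; a sketch only, assuming the default directories shown above:
	
	sudo kubeadm init phase certs etcd-server --cert-dir /etc/kubernetes/pki
	sudo kubeadm init phase kubeconfig admin --kubeconfig-dir /etc/kubernetes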
	I0815 17:58:38.937241  498968 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0815 17:58:38.947104  498968 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0815 17:58:38.949298  498968 out.go:201] 
	W0815 17:58:38.951695  498968 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0815 17:58:38.951736  498968 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0815 17:58:38.951759  498968 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0815 17:58:38.951774  498968 out.go:270] * 
	W0815 17:58:38.952821  498968 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0815 17:58:38.957549  498968 out.go:201] 
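	For reference, the two commands suggested in the exit message above, in copy-pasteable form:
	
	minikube delete --all --purge        # reset all profiles, as the suggestion advises
	minikube logs --file=logs.txt        # collect logs to attach to a GitHub issue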
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	f8767917ba9bd       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   e7bda0116747f       dashboard-metrics-scraper-8d5bb5db8-bjqpx
	ff61a35c85dd2       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   52da4e08969ef       storage-provisioner
	e83d89c3f1203       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   6fecfb88e6411       kubernetes-dashboard-cd95d586-55s8p
	7e0636e56889f       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   0740a491c4fa4       busybox
	ead5a4eaa534d       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   e5822dbd8e8b0       coredns-74ff55c5b-2w5d2
	ef1b7c6b063f2       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   58ebf932d0c5c       kube-proxy-q8bzk
	03fba565862a7       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   52da4e08969ef       storage-provisioner
	7f64c4cccb043       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   1ce5cae3cabc4       kindnet-pv6wf
	898a913aaf79f       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   5c5cc42367b1c       kube-apiserver-old-k8s-version-460705
	3cc1b8ca6d69b       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   55d2781403371       kube-scheduler-old-k8s-version-460705
	c9c42776e06ec       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   caf7b5e60608e       kube-controller-manager-old-k8s-version-460705
	1ade17af7015e       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   80905a87fb9d8       etcd-old-k8s-version-460705
	a287eed95637e       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   3023e557224c3       busybox
	bdf9a2adb56be       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   58bf3ee0a2f0e       coredns-74ff55c5b-2w5d2
	3db7dd67f888f       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   b1f833ce9b90f       kindnet-pv6wf
	755f2b704fffd       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   92f641a5dcfec       kube-proxy-q8bzk
	27a9247e670f9       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   c505ce4524a77       kube-scheduler-old-k8s-version-460705
	66d304bff9be9       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   c9b80f976fb13       kube-apiserver-old-k8s-version-460705
	cdf9ab1382b1c       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   5b6675d2c9e48       kube-controller-manager-old-k8s-version-460705
	5a4f3c7918ea8       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   71696e4c2dce0       etcd-old-k8s-version-460705
	
	
	==> containerd <==
	Aug 15 17:55:01 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:01.294493343Z" level=info msg="CreateContainer within sandbox \"e7bda0116747fa5c5e6ebc489950acd70d1d2ddc8b4a531973d781b1a2588743\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"d9b102382ee581f299b9882f61bd54a5db0253cd81577d7160d6e6548f5b254e\""
	Aug 15 17:55:01 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:01.300029678Z" level=info msg="StartContainer for \"d9b102382ee581f299b9882f61bd54a5db0253cd81577d7160d6e6548f5b254e\""
	Aug 15 17:55:01 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:01.375792742Z" level=info msg="StartContainer for \"d9b102382ee581f299b9882f61bd54a5db0253cd81577d7160d6e6548f5b254e\" returns successfully"
	Aug 15 17:55:01 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:01.419419757Z" level=info msg="shim disconnected" id=d9b102382ee581f299b9882f61bd54a5db0253cd81577d7160d6e6548f5b254e namespace=k8s.io
	Aug 15 17:55:01 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:01.419484528Z" level=warning msg="cleaning up after shim disconnected" id=d9b102382ee581f299b9882f61bd54a5db0253cd81577d7160d6e6548f5b254e namespace=k8s.io
	Aug 15 17:55:01 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:01.419496138Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 15 17:55:01 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:01.996793094Z" level=info msg="RemoveContainer for \"e5de2e87a0c3912620018d724d7d3531090f57de24f182b652900123a63e4fe7\""
	Aug 15 17:55:02 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:02.002453457Z" level=info msg="RemoveContainer for \"e5de2e87a0c3912620018d724d7d3531090f57de24f182b652900123a63e4fe7\" returns successfully"
	Aug 15 17:55:47 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:47.262553653Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:55:47 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:47.273998251Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 15 17:55:47 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:47.276241559Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 15 17:55:47 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:55:47.276305197Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 15 17:56:32 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:32.264202641Z" level=info msg="CreateContainer within sandbox \"e7bda0116747fa5c5e6ebc489950acd70d1d2ddc8b4a531973d781b1a2588743\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 15 17:56:32 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:32.291842442Z" level=info msg="CreateContainer within sandbox \"e7bda0116747fa5c5e6ebc489950acd70d1d2ddc8b4a531973d781b1a2588743\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1\""
	Aug 15 17:56:32 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:32.292566362Z" level=info msg="StartContainer for \"f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1\""
	Aug 15 17:56:32 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:32.354104668Z" level=info msg="StartContainer for \"f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1\" returns successfully"
	Aug 15 17:56:32 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:32.377799400Z" level=info msg="shim disconnected" id=f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1 namespace=k8s.io
	Aug 15 17:56:32 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:32.377875937Z" level=warning msg="cleaning up after shim disconnected" id=f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1 namespace=k8s.io
	Aug 15 17:56:32 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:32.377888401Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 15 17:56:33 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:33.253762864Z" level=info msg="RemoveContainer for \"d9b102382ee581f299b9882f61bd54a5db0253cd81577d7160d6e6548f5b254e\""
	Aug 15 17:56:33 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:56:33.261327959Z" level=info msg="RemoveContainer for \"d9b102382ee581f299b9882f61bd54a5db0253cd81577d7160d6e6548f5b254e\" returns successfully"
	Aug 15 17:58:37 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:58:37.262241687Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:58:37 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:58:37.304908736Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 15 17:58:37 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:58:37.306797181Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 15 17:58:37 old-k8s-version-460705 containerd[570]: time="2024-08-15T17:58:37.306910806Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
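	Two failure signatures repeat through the containerd log above: the dashboard-metrics-scraper shim disconnects immediately after each StartContainer (the container exits at once, feeding the CrashLoopBackOff), and every PullImage of the fake.domain image fails at reference resolution. A sketch for inspecting the exiting container, assuming the truncated ID from "container status" below is accepted as a prefix:
	
	sudo crictl inspect f8767917ba9bd | grep -i exitCode   # exit code of the last attempt
	sudo crictl logs f8767917ba9bd                         # its stdout/stderr, if any was written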
	
	
	==> coredns [bdf9a2adb56be23327153d64ad0c9dc38a35150150582beabe746d52b4c0b047] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:37583 - 56421 "HINFO IN 3658028351442965945.7113155991296006314. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03325371s
	
	
	==> coredns [ead5a4eaa534d5e7804cd8b6dbade0d16ba8cfa7ff0c6e3e566623d780d7e568] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42094 - 52203 "HINFO IN 2726528436191269335.3975961540545807473. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048914515s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-460705
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-460705
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=af53cdc78a0e70966940b8c61b099aa639786ac7
	                    minikube.k8s.io/name=old-k8s-version-460705
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_15T17_49_59_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 15 Aug 2024 17:49:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-460705
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 15 Aug 2024 17:58:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 15 Aug 2024 17:53:40 +0000   Thu, 15 Aug 2024 17:49:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 15 Aug 2024 17:53:40 +0000   Thu, 15 Aug 2024 17:49:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 15 Aug 2024 17:53:40 +0000   Thu, 15 Aug 2024 17:49:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 15 Aug 2024 17:53:40 +0000   Thu, 15 Aug 2024 17:50:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-460705
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 c25b25b4edb246cebb2f7791bf6a396d
	  System UUID:                1c1a1c9c-a5fe-45bc-a714-5b9545b6f0f3
	  Boot ID:                    b8353367-6c23-495b-9e1b-e1ab13f1b466
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 coredns-74ff55c5b-2w5d2                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m27s
	  kube-system                 etcd-old-k8s-version-460705                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m34s
	  kube-system                 kindnet-pv6wf                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m27s
	  kube-system                 kube-apiserver-old-k8s-version-460705             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-old-k8s-version-460705    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-q8bzk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 kube-scheduler-old-k8s-version-460705             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 metrics-server-9975d5f86-wd4q2                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m31s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-bjqpx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-55s8p               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m53s (x5 over 8m53s)  kubelet     Node old-k8s-version-460705 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m53s (x4 over 8m53s)  kubelet     Node old-k8s-version-460705 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m53s (x5 over 8m53s)  kubelet     Node old-k8s-version-460705 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m34s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m34s                  kubelet     Node old-k8s-version-460705 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s                  kubelet     Node old-k8s-version-460705 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s                  kubelet     Node old-k8s-version-460705 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m27s                  kubelet     Node old-k8s-version-460705 status is now: NodeReady
	  Normal  Starting                 8m25s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m2s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m2s (x9 over 6m2s)    kubelet     Node old-k8s-version-460705 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m2s (x7 over 6m2s)    kubelet     Node old-k8s-version-460705 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m2s (x7 over 6m2s)    kubelet     Node old-k8s-version-460705 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m2s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m48s                  kube-proxy  Starting kube-proxy.
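	The Allocated resources block is just the column sums of the pod table above: CPU requests of 100m + 100m + 100m + 250m + 200m + 100m + 100m = 950m, i.e. 950m of the 2-CPU (2000m) node, which rounds to the 47% shown. To pull only that block from a live cluster, one option:
	
	kubectl describe node old-k8s-version-460705 | sed -n '/Allocated resources/,/Events/p'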
	
	
	==> dmesg <==
	[Aug15 16:08] hrtimer: interrupt took 36893779 ns
	[Aug15 16:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [1ade17af7015e9fdf2f2fa93461b33518c431f63dc08b0dfaf8a75cb3c3da2c6] <==
	2024-08-15 17:54:36.014216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:54:46.013619 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:54:56.014592 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:55:06.013798 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:55:16.013627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:55:26.013625 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:55:36.013859 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:55:46.013602 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:55:56.013553 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:56:06.013776 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:56:16.013537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:56:26.013647 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:56:36.015995 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:56:46.013689 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:56:56.013661 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:57:06.013797 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:57:16.014392 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:57:26.013622 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:57:36.015138 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:57:46.013840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:57:56.013627 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:58:06.015379 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:58:16.013940 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:58:26.013609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:58:36.013853 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [5a4f3c7918ea8eedb09412c572426c6f17a04a489e6d7ff85501326f1f1d5197] <==
	raft2024/08/15 17:49:49 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/08/15 17:49:49 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/08/15 17:49:49 INFO: ea7e25599daad906 became leader at term 2
	raft2024/08/15 17:49:49 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-08-15 17:49:49.105780 I | etcdserver: published {Name:old-k8s-version-460705 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-08-15 17:49:49.106453 I | embed: ready to serve client requests
	2024-08-15 17:49:49.107436 I | embed: ready to serve client requests
	2024-08-15 17:49:49.110294 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-15 17:49:49.127715 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-15 17:49:49.138074 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-15 17:49:49.138168 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-15 17:49:49.138457 I | embed: serving client requests on 192.168.76.2:2379
	2024-08-15 17:50:11.767938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:50:19.736025 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:50:29.735936 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:50:39.735954 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:50:49.735778 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:50:59.735777 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:51:09.735934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:51:19.735792 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:51:29.735823 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:51:39.735842 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:51:49.736018 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:51:59.736306 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-15 17:52:09.735807 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 17:58:41 up  2:41,  0 users,  load average: 1.97, 2.02, 2.45
	Linux old-k8s-version-460705 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [3db7dd67f888feca0b2276c9323ee5b16672dc355bbf917a0d0b7e7aced93bf6] <==
	I0815 17:50:57.718054       1 main.go:299] handling current node
	W0815 17:51:00.808132       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:51:00.808177       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 17:51:07.718276       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:51:07.718313       1 main.go:299] handling current node
	I0815 17:51:17.718010       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:51:17.718047       1 main.go:299] handling current node
	W0815 17:51:18.454791       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:51:18.454827       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:51:27.717989       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:51:27.718025       1 main.go:299] handling current node
	I0815 17:51:37.717667       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:51:37.717700       1 main.go:299] handling current node
	W0815 17:51:39.013038       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:51:39.013078       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 17:51:47.718042       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:51:47.718085       1 main.go:299] handling current node
	W0815 17:51:50.636163       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:51:50.636222       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0815 17:51:56.729812       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:51:56.729847       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:51:57.718408       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:51:57.718444       1 main.go:299] handling current node
	I0815 17:52:07.717691       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:52:07.717722       1 main.go:299] handling current node
	
	
	==> kindnet [7f64c4cccb043a6bfc333a26044aa1eefe40737e31827b25e04b6016024a4e97] <==
	E0815 17:57:21.511157       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:57:22.718150       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:57:22.718198       1 main.go:299] handling current node
	W0815 17:57:31.223612       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:57:31.223743       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 17:57:32.717752       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:57:32.717791       1 main.go:299] handling current node
	I0815 17:57:42.717940       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:57:42.717974       1 main.go:299] handling current node
	I0815 17:57:52.717736       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:57:52.717829       1 main.go:299] handling current node
	I0815 17:58:02.717983       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:58:02.718023       1 main.go:299] handling current node
	W0815 17:58:08.912818       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0815 17:58:08.912857       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0815 17:58:12.717984       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:58:12.718024       1 main.go:299] handling current node
	W0815 17:58:20.907480       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0815 17:58:20.907516       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0815 17:58:22.717797       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:58:22.717839       1 main.go:299] handling current node
	W0815 17:58:28.549150       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:58:28.549195       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0815 17:58:32.722986       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0815 17:58:32.723031       1 main.go:299] handling current node
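	The recurring kindnet reflector failures above are RBAC denials: the kube-system:kindnet service account lacks list/watch on pods, namespaces, and networkpolicies at cluster scope. A sketch to confirm each denial, assuming a kubeconfig with impersonation rights:
	
	kubectl auth can-i list pods --as=system:serviceaccount:kube-system:kindnet
	kubectl auth can-i list namespaces --as=system:serviceaccount:kube-system:kindnet
	kubectl auth can-i list networkpolicies.networking.k8s.io --as=system:serviceaccount:kube-system:kindnet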
	
	
	==> kube-apiserver [66d304bff9be9ac00144069b8d188304a4099364071c9c78689167380142d438] <==
	I0815 17:49:56.486716       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0815 17:49:56.518936       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0815 17:49:56.523305       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0815 17:49:56.523328       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0815 17:49:56.967293       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0815 17:49:57.017726       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0815 17:49:57.155182       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0815 17:49:57.156448       1 controller.go:606] quota admission added evaluator for: endpoints
	I0815 17:49:57.161759       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0815 17:49:58.098031       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0815 17:49:58.848365       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0815 17:49:58.958049       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0815 17:50:07.270458       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0815 17:50:14.090234       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0815 17:50:14.120840       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0815 17:50:26.851512       1 client.go:360] parsed scheme: "passthrough"
	I0815 17:50:26.851569       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:50:26.851578       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 17:51:10.563251       1 client.go:360] parsed scheme: "passthrough"
	I0815 17:51:10.563299       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:51:10.563307       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 17:51:46.973923       1 client.go:360] parsed scheme: "passthrough"
	I0815 17:51:46.974169       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:51:46.974288       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0815 17:52:10.549665       1 available_controller.go:508] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-apiserver [898a913aaf79f8a83845b59e64c3e335bf1a461e0cc96cfc3055b3245157a6a6] <==
	I0815 17:55:09.154720       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:55:09.154753       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 17:55:45.547061       1 client.go:360] parsed scheme: "passthrough"
	I0815 17:55:45.547107       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:55:45.547116       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0815 17:55:53.462470       1 handler_proxy.go:102] no RequestInfo found in the context
	E0815 17:55:53.462697       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0815 17:55:53.462712       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 17:56:22.738152       1 client.go:360] parsed scheme: "passthrough"
	I0815 17:56:22.738344       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:56:22.738386       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 17:56:54.985801       1 client.go:360] parsed scheme: "passthrough"
	I0815 17:56:54.985849       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:56:54.985858       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0815 17:57:38.326357       1 client.go:360] parsed scheme: "passthrough"
	I0815 17:57:38.326422       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:57:38.326552       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0815 17:57:51.558545       1 handler_proxy.go:102] no RequestInfo found in the context
	E0815 17:57:51.558679       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0815 17:57:51.558710       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0815 17:58:19.836155       1 client.go:360] parsed scheme: "passthrough"
	I0815 17:58:19.836544       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0815 17:58:19.836592       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [c9c42776e06ec90a02b4e2a48b940084be69b82aa9d01f24118b3d0cacbfd791] <==
	W0815 17:54:15.014702       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 17:54:41.056125       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 17:54:46.665461       1 request.go:655] Throttling request took 1.04838584s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 17:54:47.516897       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 17:55:11.558082       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 17:55:19.167527       1 request.go:655] Throttling request took 1.048258022s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0815 17:55:20.024914       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 17:55:42.089567       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 17:55:51.675395       1 request.go:655] Throttling request took 1.048427441s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W0815 17:55:52.526944       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 17:56:12.591475       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 17:56:24.177680       1 request.go:655] Throttling request took 1.048190856s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 17:56:25.029381       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 17:56:43.093303       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 17:56:56.680031       1 request.go:655] Throttling request took 1.048479785s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0815 17:56:57.531394       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 17:57:13.595129       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 17:57:29.181845       1 request.go:655] Throttling request took 1.048162636s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 17:57:30.038443       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 17:57:44.097109       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 17:58:01.690889       1 request.go:655] Throttling request took 1.048190881s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0815 17:58:02.542671       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0815 17:58:14.598975       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0815 17:58:34.193254       1 request.go:655] Throttling request took 1.048398842s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v1?timeout=32s
	W0815 17:58:35.044954       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
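
Note: the apiserver's repeated 503s while loading the OpenAPI spec for "v1beta1.metrics.k8s.io" and the controller-manager's failures to discover the metrics.k8s.io/v1beta1 group share one root cause: the aggregated metrics API is registered, but the metrics-server pod backing it never becomes ready (see the kubelet section below). A way to confirm that correlation, sketched here rather than taken from the run (the pod label is assumed from the standard metrics-server manifests):

    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get pods -l k8s-app=metrics-server

An APIService showing Available=False (typically with reason MissingEndpoints or FailedDiscoveryCheck) next to a metrics-server pod stuck in ImagePullBackOff matches the log pattern above.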
	
	
	==> kube-controller-manager [cdf9ab1382b1c799e2431a4f001532965800ee9b36986f0ccf7c8b145271747f] <==
	I0815 17:50:14.166642       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0815 17:50:14.178197       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-q8bzk"
	I0815 17:50:14.206044       1 shared_informer.go:247] Caches are synced for attach detach 
	I0815 17:50:14.228402       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-rhqgd"
	I0815 17:50:14.230004       1 event.go:291] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-460705" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0815 17:50:14.243201       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
	I0815 17:50:14.255199       1 shared_informer.go:247] Caches are synced for disruption 
	I0815 17:50:14.255228       1 disruption.go:339] Sending events to api server.
	I0815 17:50:14.270086       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client 
	I0815 17:50:14.274313       1 shared_informer.go:247] Caches are synced for resource quota 
	I0815 17:50:14.270662       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
	I0815 17:50:14.274727       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pv6wf"
	I0815 17:50:14.270686       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
	I0815 17:50:14.270886       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown 
	I0815 17:50:14.275432       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-2w5d2"
	I0815 17:50:14.280272       1 shared_informer.go:247] Caches are synced for stateful set 
	I0815 17:50:14.296391       1 shared_informer.go:247] Caches are synced for resource quota 
	I0815 17:50:14.460429       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0815 17:50:14.743918       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0815 17:50:14.743950       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0815 17:50:14.768770       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0815 17:50:15.409257       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0815 17:50:15.437813       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-rhqgd"
	I0815 17:50:19.164749       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0815 17:52:09.890638       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	
	
	==> kube-proxy [755f2b704fffdd8d9b23d12ec7956bb10fbb9877ab34898d14e3a3adb72835ef] <==
	I0815 17:50:16.545267       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0815 17:50:16.545430       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0815 17:50:16.564606       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0815 17:50:16.564748       1 server_others.go:185] Using iptables Proxier.
	I0815 17:50:16.564980       1 server.go:650] Version: v1.20.0
	I0815 17:50:16.565988       1 config.go:315] Starting service config controller
	I0815 17:50:16.565997       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0815 17:50:16.566012       1 config.go:224] Starting endpoint slice config controller
	I0815 17:50:16.566016       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0815 17:50:16.666119       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0815 17:50:16.666199       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [ef1b7c6b063f2cd961f59b1d0714af891c57754777938ed89cd2dec3efb4ad72] <==
	I0815 17:52:53.017503       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0815 17:52:53.017665       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0815 17:52:53.039441       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0815 17:52:53.039538       1 server_others.go:185] Using iptables Proxier.
	I0815 17:52:53.039941       1 server.go:650] Version: v1.20.0
	I0815 17:52:53.040848       1 config.go:315] Starting service config controller
	I0815 17:52:53.040864       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0815 17:52:53.040890       1 config.go:224] Starting endpoint slice config controller
	I0815 17:52:53.040894       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0815 17:52:53.141022       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0815 17:52:53.141213       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [27a9247e670f991144c4c1a3eb30e38e561602852ee61c4cde95b747995cb666] <==
	I0815 17:49:50.907584       1 serving.go:331] Generated self-signed cert in-memory
	W0815 17:49:55.643469       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 17:49:55.643670       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 17:49:55.643761       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 17:49:55.643846       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 17:49:55.729077       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0815 17:49:55.729969       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 17:49:55.730120       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 17:49:55.730245       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0815 17:49:55.741452       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0815 17:49:55.741691       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0815 17:49:55.741774       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0815 17:49:55.741849       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0815 17:49:55.741913       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:49:55.741977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0815 17:49:55.742041       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0815 17:49:55.742112       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0815 17:49:55.742562       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0815 17:49:55.753507       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0815 17:49:55.753610       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0815 17:49:55.753693       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0815 17:49:56.628581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0815 17:49:56.664475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0815 17:49:57.130358       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [3cc1b8ca6d69b1ac83fbb3387d914376ff8c4cfaeedff122c527dd34f2de5065] <==
	I0815 17:52:44.921551       1 serving.go:331] Generated self-signed cert in-memory
	W0815 17:52:50.325370       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0815 17:52:50.325401       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0815 17:52:50.325410       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0815 17:52:50.325415       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0815 17:52:50.557835       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0815 17:52:50.558677       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 17:52:50.561053       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0815 17:52:50.561263       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0815 17:52:50.765056       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 15 17:57:11 old-k8s-version-460705 kubelet[665]: E0815 17:57:11.264783     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:57:14 old-k8s-version-460705 kubelet[665]: I0815 17:57:14.261180     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1
	Aug 15 17:57:14 old-k8s-version-460705 kubelet[665]: E0815 17:57:14.262035     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	Aug 15 17:57:25 old-k8s-version-460705 kubelet[665]: E0815 17:57:25.262627     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:57:28 old-k8s-version-460705 kubelet[665]: I0815 17:57:28.261123     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1
	Aug 15 17:57:28 old-k8s-version-460705 kubelet[665]: E0815 17:57:28.261540     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	Aug 15 17:57:38 old-k8s-version-460705 kubelet[665]: E0815 17:57:38.261980     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:57:41 old-k8s-version-460705 kubelet[665]: I0815 17:57:41.261232     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1
	Aug 15 17:57:41 old-k8s-version-460705 kubelet[665]: E0815 17:57:41.262122     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	Aug 15 17:57:53 old-k8s-version-460705 kubelet[665]: E0815 17:57:53.262019     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:57:56 old-k8s-version-460705 kubelet[665]: I0815 17:57:56.261095     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1
	Aug 15 17:57:56 old-k8s-version-460705 kubelet[665]: E0815 17:57:56.268331     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	Aug 15 17:58:04 old-k8s-version-460705 kubelet[665]: E0815 17:58:04.261881     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: I0815 17:58:10.261181     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1
	Aug 15 17:58:10 old-k8s-version-460705 kubelet[665]: E0815 17:58:10.261544     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	Aug 15 17:58:15 old-k8s-version-460705 kubelet[665]: E0815 17:58:15.265484     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:58:24 old-k8s-version-460705 kubelet[665]: I0815 17:58:24.261098     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1
	Aug 15 17:58:24 old-k8s-version-460705 kubelet[665]: E0815 17:58:24.261897     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
	Aug 15 17:58:26 old-k8s-version-460705 kubelet[665]: E0815 17:58:26.262340     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 15 17:58:37 old-k8s-version-460705 kubelet[665]: E0815 17:58:37.307268     665 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 15 17:58:37 old-k8s-version-460705 kubelet[665]: E0815 17:58:37.307726     665 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 15 17:58:37 old-k8s-version-460705 kubelet[665]: E0815 17:58:37.307965     665 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-fcq8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 15 17:58:37 old-k8s-version-460705 kubelet[665]: E0815 17:58:37.308217     665 pod_workers.go:191] Error syncing pod 55e12ec7-9686-43b9-abb4-2e1948bdb964 ("metrics-server-9975d5f86-wd4q2_kube-system(55e12ec7-9686-43b9-abb4-2e1948bdb964)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 15 17:58:38 old-k8s-version-460705 kubelet[665]: I0815 17:58:38.261058     665 scope.go:95] [topologymanager] RemoveContainer - Container ID: f8767917ba9bd125b6308e096eb46c6b5dd903f034c914f296e062543760bdb1
	Aug 15 17:58:38 old-k8s-version-460705 kubelet[665]: E0815 17:58:38.261468     665 pod_workers.go:191] Error syncing pod 601fa193-1e61-4253-946e-804782a0e79e ("dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-bjqpx_kubernetes-dashboard(601fa193-1e61-4253-946e-804782a0e79e)"
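
Note: every pull failure above traces to the image reference "fake.domain/registry.k8s.io/echoserver:1.4". The DNS error ("lookup fake.domain ... no such host") shows the failure happens at registry resolution, before any manifest or layer is fetched, so the ImagePullBackOff looks like a deliberate unresolvable-registry fixture of this suite rather than an infrastructure fault. A hedged way to read the offending reference back from the cluster (not executed in this run; the deployment name is inferred from the pod name):

    kubectl -n kube-system get deployment metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'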
	
	
	==> kubernetes-dashboard [e83d89c3f120386eebcc2727e9273c7d2b41c2b4d9b773f0e5c9da2502928364] <==
	2024/08/15 17:53:18 Starting overwatch
	2024/08/15 17:53:18 Using namespace: kubernetes-dashboard
	2024/08/15 17:53:18 Using in-cluster config to connect to apiserver
	2024/08/15 17:53:18 Using secret token for csrf signing
	2024/08/15 17:53:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/15 17:53:19 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/15 17:53:19 Successful initial request to the apiserver, version: v1.20.0
	2024/08/15 17:53:19 Generating JWE encryption key
	2024/08/15 17:53:19 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/15 17:53:19 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/15 17:53:19 Initializing JWE encryption key from synchronized object
	2024/08/15 17:53:19 Creating in-cluster Sidecar client
	2024/08/15 17:53:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:53:19 Serving insecurely on HTTP port: 9090
	2024/08/15 17:53:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:54:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:54:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:55:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:55:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:56:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:56:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:57:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:57:49 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/15 17:58:19 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [03fba565862a7760e221d339f5b4f907f0d8ee3b1f70a20b20c831afdcbeca47] <==
	I0815 17:52:52.466744       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0815 17:53:22.468784       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
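
Note: this first storage-provisioner instance exits fatally after roughly 30 seconds because it cannot reach the in-cluster apiserver VIP (10.96.0.1:443); the replacement instance below comes up and acquires the leader lease. A minimal connectivity probe against that VIP, offered purely as a sketch (image tag and flags are illustrative, not from this run):

    kubectl run apiserver-probe --rm -it --restart=Never \
      --image=curlimages/curl:8.8.0 -- curl -sk https://10.96.0.1/version

The unauthenticated /version endpoint is normally readable via the default system:public-info-viewer binding, so a JSON version payload indicates the service VIP is routable again.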
	
	
	==> storage-provisioner [ff61a35c85dd2bb5094fad476aaf023a33b37d52fe21e8249ef38acfc459ec95] <==
	I0815 17:53:34.447018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0815 17:53:34.467049       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0815 17:53:34.467331       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0815 17:53:51.950959       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0815 17:53:51.954772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-460705_5ae179db-6b5e-4c52-8d54-07022649be7a!
	I0815 17:53:51.966063       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"671bd55f-0bae-4e2a-97a0-1fb99c05f7f1", APIVersion:"v1", ResourceVersion:"841", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-460705_5ae179db-6b5e-4c52-8d54-07022649be7a became leader
	I0815 17:53:52.055283       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-460705_5ae179db-6b5e-4c52-8d54-07022649be7a!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460705 -n old-k8s-version-460705
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-460705 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-wd4q2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-460705 describe pod metrics-server-9975d5f86-wd4q2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-460705 describe pod metrics-server-9975d5f86-wd4q2: exit status 1 (114.46452ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-wd4q2" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-460705 describe pod metrics-server-9975d5f86-wd4q2: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (380.28s)

Test pass (298/328)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 11.13
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 6.21
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 215.86
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 15.19
34 TestAddons/parallel/Ingress 20
35 TestAddons/parallel/InspektorGadget 11.21
36 TestAddons/parallel/MetricsServer 5.83
39 TestAddons/parallel/CSI 57.98
40 TestAddons/parallel/Headlamp 16.02
41 TestAddons/parallel/CloudSpanner 5.63
42 TestAddons/parallel/LocalPath 8.91
43 TestAddons/parallel/NvidiaDevicePlugin 5.56
44 TestAddons/parallel/Yakd 11.88
45 TestAddons/StoppedEnableDisable 12.31
46 TestCertOptions 31.05
47 TestCertExpiration 226.23
49 TestForceSystemdFlag 38.87
50 TestForceSystemdEnv 43.04
51 TestDockerEnvContainerd 44.53
56 TestErrorSpam/setup 31.79
57 TestErrorSpam/start 0.75
58 TestErrorSpam/status 1.2
59 TestErrorSpam/pause 1.77
60 TestErrorSpam/unpause 1.89
61 TestErrorSpam/stop 1.51
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 57
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 7.25
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.61
73 TestFunctional/serial/CacheCmd/cache/add_local 1.35
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.21
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 44.65
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.67
84 TestFunctional/serial/LogsFileCmd 1.66
85 TestFunctional/serial/InvalidService 4.99
87 TestFunctional/parallel/ConfigCmd 0.45
88 TestFunctional/parallel/DashboardCmd 10.94
89 TestFunctional/parallel/DryRun 0.68
90 TestFunctional/parallel/InternationalLanguage 0.27
91 TestFunctional/parallel/StatusCmd 1.08
95 TestFunctional/parallel/ServiceCmdConnect 8.59
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 24.91
99 TestFunctional/parallel/SSHCmd 0.67
100 TestFunctional/parallel/CpCmd 1.99
102 TestFunctional/parallel/FileSync 0.36
103 TestFunctional/parallel/CertSync 2.06
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
111 TestFunctional/parallel/License 0.23
112 TestFunctional/parallel/Version/short 0.13
113 TestFunctional/parallel/Version/components 1.36
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.8
119 TestFunctional/parallel/ImageCommands/Setup 0.77
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.49
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.36
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.29
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.36
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.98
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.35
136 TestFunctional/parallel/ServiceCmd/List 0.33
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.34
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
139 TestFunctional/parallel/ServiceCmd/Format 0.36
140 TestFunctional/parallel/ServiceCmd/URL 0.36
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
148 TestFunctional/parallel/ProfileCmd/profile_list 0.4
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
150 TestFunctional/parallel/MountCmd/any-port 8.09
151 TestFunctional/parallel/MountCmd/specific-port 1.92
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.02
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 108.49
160 TestMultiControlPlane/serial/DeployApp 32.44
161 TestMultiControlPlane/serial/PingHostFromPods 1.53
162 TestMultiControlPlane/serial/AddWorkerNode 20.98
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.8
165 TestMultiControlPlane/serial/CopyFile 19.12
166 TestMultiControlPlane/serial/StopSecondaryNode 12.87
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.93
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 130.82
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.66
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
173 TestMultiControlPlane/serial/StopCluster 36.03
174 TestMultiControlPlane/serial/RestartCluster 42.13
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
176 TestMultiControlPlane/serial/AddSecondaryNode 43.87
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.1
181 TestJSONOutput/start/Command 51.13
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 1.06
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.79
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 36.32
207 TestKicCustomNetwork/use_default_bridge_network 32.09
208 TestKicExistingNetwork 30.76
209 TestKicCustomSubnet 34.56
210 TestKicStaticIP 38.01
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 66.12
215 TestMountStart/serial/StartWithMountFirst 6.69
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.86
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.61
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.19
222 TestMountStart/serial/RestartStopped 7.47
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 64.86
227 TestMultiNode/serial/DeployApp2Nodes 17.17
228 TestMultiNode/serial/PingHostFrom2Pods 1.11
229 TestMultiNode/serial/AddNode 19.64
230 TestMultiNode/serial/MultiNodeLabels 0.11
231 TestMultiNode/serial/ProfileList 0.35
232 TestMultiNode/serial/CopyFile 9.91
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 9.88
235 TestMultiNode/serial/RestartKeepsNodes 90.14
236 TestMultiNode/serial/DeleteNode 5.51
237 TestMultiNode/serial/StopMultiNode 24.06
238 TestMultiNode/serial/RestartMultiNode 53.17
239 TestMultiNode/serial/ValidateNameConflict 35.01
244 TestPreload 132.56
246 TestScheduledStopUnix 107.02
249 TestInsufficientStorage 10.05
250 TestRunningBinaryUpgrade 83.54
252 TestKubernetesUpgrade 103.61
253 TestMissingContainerUpgrade 164.45
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 41.66
257 TestNoKubernetes/serial/StartWithStopK8s 18.1
258 TestNoKubernetes/serial/Start 5.7
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
260 TestNoKubernetes/serial/ProfileList 1.19
261 TestNoKubernetes/serial/Stop 1.24
262 TestNoKubernetes/serial/StartNoArgs 7.04
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.33
264 TestStoppedBinaryUpgrade/Setup 0.87
265 TestStoppedBinaryUpgrade/Upgrade 119.94
274 TestPause/serial/Start 59.36
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
276 TestPause/serial/SecondStartNoReconfiguration 7.6
280 TestPause/serial/Pause 0.93
285 TestNetworkPlugins/group/false 5.67
286 TestPause/serial/VerifyStatus 0.38
287 TestPause/serial/Unpause 0.88
288 TestPause/serial/PauseAgain 1.12
289 TestPause/serial/DeletePaused 2.98
293 TestPause/serial/VerifyDeletedResources 0.19
295 TestStartStop/group/old-k8s-version/serial/FirstStart 163.33
297 TestStartStop/group/no-preload/serial/FirstStart 72.54
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.7
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.64
300 TestStartStop/group/old-k8s-version/serial/Stop 12.47
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
303 TestStartStop/group/no-preload/serial/DeployApp 7.44
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.66
305 TestStartStop/group/no-preload/serial/Stop 12.29
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 268.8
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
311 TestStartStop/group/no-preload/serial/Pause 3.12
313 TestStartStop/group/embed-certs/serial/FirstStart 53.79
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
317 TestStartStop/group/old-k8s-version/serial/Pause 2.87
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.45
320 TestStartStop/group/embed-certs/serial/DeployApp 8.48
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.55
322 TestStartStop/group/embed-certs/serial/Stop 12.35
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
324 TestStartStop/group/embed-certs/serial/SecondStart 270.22
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.42
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.23
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.08
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 278.08
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.11
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/embed-certs/serial/Pause 3.05
335 TestStartStop/group/newest-cni/serial/FirstStart 36.43
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.63
338 TestStartStop/group/newest-cni/serial/Stop 1.3
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/newest-cni/serial/SecondStart 14.44
341 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.38
344 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.53
348 TestStartStop/group/newest-cni/serial/Pause 4.68
349 TestNetworkPlugins/group/auto/Start 72.87
350 TestNetworkPlugins/group/kindnet/Start 72.11
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/auto/KubeletFlags 0.28
353 TestNetworkPlugins/group/auto/NetCatPod 9.27
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
355 TestNetworkPlugins/group/kindnet/NetCatPod 8.29
356 TestNetworkPlugins/group/auto/DNS 0.17
357 TestNetworkPlugins/group/auto/Localhost 0.17
358 TestNetworkPlugins/group/auto/HairPin 0.15
359 TestNetworkPlugins/group/kindnet/DNS 0.18
360 TestNetworkPlugins/group/kindnet/Localhost 0.14
361 TestNetworkPlugins/group/kindnet/HairPin 0.14
362 TestNetworkPlugins/group/calico/Start 72.51
363 TestNetworkPlugins/group/custom-flannel/Start 58.35
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.27
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/custom-flannel/DNS 0.23
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
370 TestNetworkPlugins/group/calico/KubeletFlags 0.29
371 TestNetworkPlugins/group/calico/NetCatPod 11.25
372 TestNetworkPlugins/group/calico/DNS 0.22
373 TestNetworkPlugins/group/calico/Localhost 0.24
374 TestNetworkPlugins/group/calico/HairPin 0.28
375 TestNetworkPlugins/group/enable-default-cni/Start 82.13
376 TestNetworkPlugins/group/flannel/Start 58.13
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
379 TestNetworkPlugins/group/flannel/NetCatPod 9.25
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.37
382 TestNetworkPlugins/group/flannel/DNS 0.17
383 TestNetworkPlugins/group/flannel/Localhost 0.16
384 TestNetworkPlugins/group/flannel/HairPin 0.15
385 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
386 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
388 TestNetworkPlugins/group/bridge/Start 75.8
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 9.27
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.15

TestDownloadOnly/v1.20.0/json-events (11.13s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-549752 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-549752 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.129589677s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.13s)
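
With --download-only, minikube stops after priming the local cache (kic base image, preloaded-images tarball, and the kubectl/kubelet/kubeadm binaries) and never creates the node container. A minimal local reproduction, assuming a default ~/.minikube home and a hypothetical profile name:

    # fetch only the v1.20.0 artifacts; no cluster is started
    minikube start -p dl-demo --download-only --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker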

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
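
The preload-exists subtest just confirms that the tarball fetched above landed in the cache. The equivalent manual check (path taken from the download URL logged during the download; adjust if MINIKUBE_HOME is overridden):

    ls ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4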

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-549752
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-549752: exit status 85 (69.963523ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-549752 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |          |
	|         | -p download-only-549752        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:05:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:05:05.291235  298135 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:05.291400  298135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:05.291411  298135 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:05.291416  298135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:05.291651  298135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	W0815 17:05:05.291789  298135 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19450-292730/.minikube/config/config.json: open /home/jenkins/minikube-integration/19450-292730/.minikube/config/config.json: no such file or directory
	I0815 17:05:05.292184  298135 out.go:352] Setting JSON to true
	I0815 17:05:05.293033  298135 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6449,"bootTime":1723735057,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 17:05:05.293103  298135 start.go:139] virtualization:  
	I0815 17:05:05.295913  298135 out.go:97] [download-only-549752] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0815 17:05:05.296057  298135 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball: no such file or directory
	I0815 17:05:05.296099  298135 notify.go:220] Checking for updates...
	I0815 17:05:05.298305  298135 out.go:169] MINIKUBE_LOCATION=19450
	I0815 17:05:05.299918  298135 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:05.302117  298135 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:05:05.303946  298135 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	I0815 17:05:05.305577  298135 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0815 17:05:05.309044  298135 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 17:05:05.309740  298135 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:05.330442  298135 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:05:05.330535  298135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:05.388096  298135 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 17:05:05.378358613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:05:05.388207  298135 docker.go:307] overlay module found
	I0815 17:05:05.390205  298135 out.go:97] Using the docker driver based on user configuration
	I0815 17:05:05.390236  298135 start.go:297] selected driver: docker
	I0815 17:05:05.390246  298135 start.go:901] validating driver "docker" against <nil>
	I0815 17:05:05.390354  298135 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:05.441961  298135 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 17:05:05.43265538 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:05:05.442133  298135 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:05.442414  298135 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0815 17:05:05.442567  298135 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 17:05:05.445253  298135 out.go:169] Using Docker driver with root privileges
	I0815 17:05:05.447229  298135 cni.go:84] Creating CNI manager for ""
	I0815 17:05:05.447249  298135 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:05:05.447261  298135 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:05.447345  298135 start.go:340] cluster config:
	{Name:download-only-549752 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-549752 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:05.449319  298135 out.go:97] Starting "download-only-549752" primary control-plane node in "download-only-549752" cluster
	I0815 17:05:05.449353  298135 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 17:05:05.451156  298135 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:05:05.451180  298135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 17:05:05.451337  298135 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:05:05.466468  298135 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:05:05.466641  298135 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:05:05.466738  298135 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:05:05.513579  298135 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0815 17:05:05.513604  298135 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:05.513774  298135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 17:05:05.515928  298135 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0815 17:05:05.515956  298135 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 17:05:05.609273  298135 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0815 17:05:10.629154  298135 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:05:11.067252  298135 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 17:05:11.067365  298135 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 17:05:12.156877  298135 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0815 17:05:12.157282  298135 profile.go:143] Saving config to /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/download-only-549752/config.json ...
	I0815 17:05:12.157316  298135 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/download-only-549752/config.json: {Name:mkb22f848ada60692ce7563ab6f0187c7fe50793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0815 17:05:12.158042  298135 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0815 17:05:12.158609  298135 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19450-292730/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-549752 host does not exist
	  To start a cluster, run: "minikube start -p download-only-549752"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
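
The non-zero exit here is the point of the subtest: with --download-only the control-plane container was never created, so "minikube logs" has nothing to inspect, and the test evidently treats exit status 85 as the expected outcome rather than a failure. A sketch of the same check while the profile still exists:

    minikube logs -p download-only-549752 || echo "expected: host was never created"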

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-549752
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
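
DeleteAll and DeleteAlwaysSucceeds cover the two cleanup paths; the second name reflects that deleting a profile is expected to succeed even though no host container was ever created for it. By hand:

    minikube delete --all
    minikube delete -p download-only-549752   # expected to succeed despite the missing host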

TestDownloadOnly/v1.31.0/json-events (6.21s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-473657 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-473657 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.205734963s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.21s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-473657
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-473657: exit status 85 (71.613965ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-549752 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | -p download-only-549752        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| delete  | -p download-only-549752        | download-only-549752 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC | 15 Aug 24 17:05 UTC |
	| start   | -o=json --download-only        | download-only-473657 | jenkins | v1.33.1 | 15 Aug 24 17:05 UTC |                     |
	|         | -p download-only-473657        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/15 17:05:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0815 17:05:16.824942  298339 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:05:16.825070  298339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:16.825081  298339 out.go:358] Setting ErrFile to fd 2...
	I0815 17:05:16.825086  298339 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:05:16.825329  298339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:05:16.825759  298339 out.go:352] Setting JSON to true
	I0815 17:05:16.826602  298339 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6460,"bootTime":1723735057,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 17:05:16.826668  298339 start.go:139] virtualization:  
	I0815 17:05:16.828989  298339 out.go:97] [download-only-473657] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 17:05:16.829185  298339 notify.go:220] Checking for updates...
	I0815 17:05:16.831068  298339 out.go:169] MINIKUBE_LOCATION=19450
	I0815 17:05:16.833015  298339 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:05:16.835171  298339 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:05:16.836932  298339 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	I0815 17:05:16.838764  298339 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0815 17:05:16.842370  298339 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0815 17:05:16.842640  298339 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:05:16.873257  298339 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:05:16.873360  298339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:16.926735  298339 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 17:05:16.917580617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:05:16.926844  298339 docker.go:307] overlay module found
	I0815 17:05:16.928753  298339 out.go:97] Using the docker driver based on user configuration
	I0815 17:05:16.928783  298339 start.go:297] selected driver: docker
	I0815 17:05:16.928789  298339 start.go:901] validating driver "docker" against <nil>
	I0815 17:05:16.928903  298339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:05:16.984967  298339 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-15 17:05:16.975628428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:05:16.985175  298339 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0815 17:05:16.985479  298339 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0815 17:05:16.985644  298339 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0815 17:05:16.987424  298339 out.go:169] Using Docker driver with root privileges
	I0815 17:05:16.989219  298339 cni.go:84] Creating CNI manager for ""
	I0815 17:05:16.989245  298339 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0815 17:05:16.989256  298339 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0815 17:05:16.989329  298339 start.go:340] cluster config:
	{Name:download-only-473657 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-473657 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:05:16.991031  298339 out.go:97] Starting "download-only-473657" primary control-plane node in "download-only-473657" cluster
	I0815 17:05:16.991052  298339 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0815 17:05:16.992790  298339 out.go:97] Pulling base image v0.0.44-1723650208-19443 ...
	I0815 17:05:16.992816  298339 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 17:05:16.992993  298339 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local docker daemon
	I0815 17:05:17.009556  298339 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 to local cache
	I0815 17:05:17.009674  298339 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory
	I0815 17:05:17.009705  298339 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 in local cache directory, skipping pull
	I0815 17:05:17.009715  298339 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 exists in cache, skipping pull
	I0815 17:05:17.009723  298339 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 as a tarball
	I0815 17:05:17.049800  298339 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0815 17:05:17.049827  298339 cache.go:56] Caching tarball of preloaded images
	I0815 17:05:17.050375  298339 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0815 17:05:17.052535  298339 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0815 17:05:17.052581  298339 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 17:05:17.137658  298339 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0815 17:05:21.343449  298339 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0815 17:05:21.343572  298339 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19450-292730/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-473657 host does not exist
	  To start a cluster, run: "minikube start -p download-only-473657"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-473657
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-748422 --alsologtostderr --binary-mirror http://127.0.0.1:41401 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-748422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-748422
--- PASS: TestBinaryMirror (0.56s)
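
--binary-mirror redirects the kubectl/kubelet/kubeadm downloads away from dl.k8s.io. A rough sketch of exercising the flag locally, assuming a mirror that replicates the dl.k8s.io release path layout (the python server is only a hypothetical stand-in for whatever served port 41401 during the test):

    python3 -m http.server 41401 &   # stand-in mirror on the port used above
    minikube start --download-only -p mirror-demo --binary-mirror http://127.0.0.1:41401 --driver=docker --container-runtime=containerd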

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-773218
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-773218: exit status 85 (68.910202ms)

-- stdout --
	* Profile "addons-773218" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-773218"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-773218
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-773218: exit status 85 (64.089122ms)

-- stdout --
	* Profile "addons-773218" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-773218"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (215.86s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-773218 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-773218 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m35.862110011s)
--- PASS: TestAddons/Setup (215.86s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-773218 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-773218 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)
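
This subtest checks that the gcp-auth addon propagates its credentials secret into namespaces created after the addon was enabled, so fresh workloads pick up credentials without manual copying. The check reduces to:

    kubectl create ns new-namespace
    kubectl get secret gcp-auth -n new-namespace   # should already exist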

TestAddons/parallel/Registry (15.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.643587ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-t6znz" [d6170f0f-298c-44dc-bd48-48bc98e610d4] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003417893s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2294p" [f8ae5fc1-3cb4-4610-b95a-966036ad420a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004754644s
addons_test.go:342: (dbg) Run:  kubectl --context addons-773218 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-773218 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-773218 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.131890143s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 ip
2024/08/15 17:12:54 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.19s)
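
The heart of the registry test is the in-cluster reachability probe: a throwaway busybox pod resolves the registry's cluster-DNS name and asks wget to verify the endpoint without downloading anything. The same probe by hand (the pod name is arbitrary):

    kubectl run --rm -it registry-check --restart=Never --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"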

TestAddons/parallel/Ingress (20s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-773218 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-773218 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-773218 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4bcaacef-d8d2-4256-aee9-fa33009655ee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4bcaacef-d8d2-4256-aee9-fa33009655ee] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004015658s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-773218 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-773218 addons disable ingress-dns --alsologtostderr -v=1: (1.479487084s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-773218 addons disable ingress --alsologtostderr -v=1: (7.830572446s)
--- PASS: TestAddons/parallel/Ingress (20.00s)
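
Two routing paths are verified here: plain ingress (curl from inside the node with an explicit Host header, since the controller only matches nginx.example.com) and ingress-dns (resolving a test hostname against the node IP). Interactively, with the same profile:

    minikube -p addons-773218 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test $(minikube -p addons-773218 ip)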

TestAddons/parallel/InspektorGadget (11.21s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-gczvz" [dce162f9-e608-40f3-a4c8-19dceee07e8f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.006096857s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-773218
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-773218: (6.205428952s)
--- PASS: TestAddons/parallel/InspektorGadget (11.21s)

TestAddons/parallel/MetricsServer (5.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.434341ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-pbx6n" [1ee3a754-0bfc-4950-bfb9-8f7863cb518e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004106635s
addons_test.go:417: (dbg) Run:  kubectl --context addons-773218 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.83s)
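
kubectl top is the natural smoke test here because it fails until metrics-server has served at least one scrape, which is why the test waits for the pod to be healthy before querying:

    kubectl top pods -n kube-system   # expect an error until the first metrics scrape completes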

TestAddons/parallel/CSI (57.98s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.603839ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-773218 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-773218 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1b4cfd05-b52e-45ef-9257-a37d08c4ce48] Pending
helpers_test.go:344: "task-pv-pod" [1b4cfd05-b52e-45ef-9257-a37d08c4ce48] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1b4cfd05-b52e-45ef-9257-a37d08c4ce48] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.019881228s
addons_test.go:590: (dbg) Run:  kubectl --context addons-773218 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-773218 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-773218 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-773218 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-773218 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-773218 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-773218 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f0694dc1-2f41-4851-a7a1-bf92dd9ed246] Pending
helpers_test.go:344: "task-pv-pod-restore" [f0694dc1-2f41-4851-a7a1-bf92dd9ed246] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f0694dc1-2f41-4851-a7a1-bf92dd9ed246] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00726775s
addons_test.go:632: (dbg) Run:  kubectl --context addons-773218 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-773218 delete pod task-pv-pod-restore: (1.049950619s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-773218 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-773218 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-773218 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.922075728s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-773218 addons disable volumesnapshots --alsologtostderr -v=1: (1.24432302s)
--- PASS: TestAddons/parallel/CSI (57.98s)
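
For reference, the snapshot-and-restore flow this test walks through can be replayed by hand against the same profile; a minimal sketch using only commands already shown in the log (poll the two jsonpath outputs until they report "true" and "Bound" respectively):

    kubectl --context addons-773218 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-773218 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
    kubectl --context addons-773218 delete pod task-pv-pod
    kubectl --context addons-773218 delete pvc hpvc
    kubectl --context addons-773218 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-773218 get pvc hpvc-restore -o jsonpath={.status.phase} -n default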

                                                
                                    
TestAddons/parallel/Headlamp (16.02s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-773218 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-773218 --alsologtostderr -v=1: (1.065933678s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-c4bvp" [bf6ebf7f-fef5-4cec-97d0-4162b66d8ab6] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-c4bvp" [bf6ebf7f-fef5-4cec-97d0-4162b66d8ab6] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-c4bvp" [bf6ebf7f-fef5-4cec-97d0-4162b66d8ab6] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004867027s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-773218 addons disable headlamp --alsologtostderr -v=1: (5.950500296s)
--- PASS: TestAddons/parallel/Headlamp (16.02s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.63s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-mjslr" [b3ec5c28-b5b3-422f-b36d-d6fd27ead209] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003972394s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-773218
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                    
TestAddons/parallel/LocalPath (8.91s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-773218 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-773218 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-773218 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [21224a26-27a1-408a-9bb2-30cbe3c9a268] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [21224a26-27a1-408a-9bb2-30cbe3c9a268] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [21224a26-27a1-408a-9bb2-30cbe3c9a268] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003915782s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-773218 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 ssh "cat /opt/local-path-provisioner/pvc-f7272129-e9c7-4285-8be1-571ecc2582be_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-773218 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-773218 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.91s)
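
The ssh step above is how the test proves the provisioner actually wrote the pod's data onto the node: local-path volumes land under /opt/local-path-provisioner in a directory named pvc-<uid>_<namespace>_<claim>. A sketch of the same check (the <uid> placeholder stands in for the PVC's generated UID, which differs on every run):

    out/minikube-linux-arm64 -p addons-773218 ssh "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"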

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.56s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-jm8xf" [3767faf1-4959-47be-99ef-741d4904feca] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004689687s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-773218
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.56s)

                                                
                                    
TestAddons/parallel/Yakd (11.88s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-mmx8g" [9dcb9169-4033-4c0b-8af3-f8a4fe6a7aab] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004283139s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-773218 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-773218 addons disable yakd --alsologtostderr -v=1: (5.875776128s)
--- PASS: TestAddons/parallel/Yakd (11.88s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-773218
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-773218: (12.036718419s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-773218
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-773218
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-773218
--- PASS: TestAddons/StoppedEnableDisable (12.31s)
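
Note the ordering the test relies on: the enable/disable commands run while the cluster is stopped, so they only have to update the profile's stored addon config rather than talk to a live apiserver. A minimal sketch of the same sequence:

    out/minikube-linux-arm64 stop -p addons-773218
    out/minikube-linux-arm64 addons enable dashboard -p addons-773218
    out/minikube-linux-arm64 addons disable dashboard -p addons-773218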

                                                
                                    
TestCertOptions (31.05s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-559985 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0815 17:49:00.704398  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-559985 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (28.318190467s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-559985 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-559985 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-559985 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-559985" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-559985
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-559985: (2.053252609s)
--- PASS: TestCertOptions (31.05s)
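
One way to spot-check what this test asserts: the extra --apiserver-ips and --apiserver-names values should show up as Subject Alternative Names in the generated apiserver certificate, and the kubeconfig should point at port 8555. A sketch built on the log's own commands (the grep filter is an added convenience, not part of the test):

    out/minikube-linux-arm64 -p cert-options-559985 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-559985 config view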

                                                
                                    
TestCertExpiration (226.23s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-900222 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0815 17:48:06.571415  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-900222 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (36.477263191s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-900222 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-900222 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.383908001s)
helpers_test.go:175: Cleaning up "cert-expiration-900222" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-900222
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-900222: (2.363029358s)
--- PASS: TestCertExpiration (226.23s)
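
The second start completes in seconds because minikube rotates the expired short-lived certificates on restart rather than rebuilding the node. To inspect the new validity window by hand (assuming the same /var/lib/minikube/certs layout seen in TestCertOptions):

    out/minikube-linux-arm64 -p cert-expiration-900222 ssh "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"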

                                                
                                    
TestForceSystemdFlag (38.87s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-004742 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-004742 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.202818454s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-004742 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-004742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-004742
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-004742: (2.24771767s)
--- PASS: TestForceSystemdFlag (38.87s)
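
The cat of /etc/containerd/config.toml is the actual assertion here: with --force-systemd, containerd's runc runtime must be switched to the systemd cgroup driver. A quick manual check (the grep is an added convenience; the key itself is containerd's standard option):

    out/minikube-linux-arm64 -p force-systemd-flag-004742 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
    # expected: SystemdCgroup = true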

                                                
                                    
TestForceSystemdEnv (43.04s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-814095 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-814095 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.417003389s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-814095 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-814095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-814095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-814095: (2.233003303s)
--- PASS: TestForceSystemdEnv (43.04s)

                                                
                                    
TestDockerEnvContainerd (44.53s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-653615 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-653615 --driver=docker  --container-runtime=containerd: (28.791139971s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-653615"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-653615": (1.002368326s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-k0s5MBYqvQD8/agent.316834" SSH_AGENT_PID="316835" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-k0s5MBYqvQD8/agent.316834" SSH_AGENT_PID="316835" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-k0s5MBYqvQD8/agent.316834" SSH_AGENT_PID="316835" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.166398522s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-k0s5MBYqvQD8/agent.316834" SSH_AGENT_PID="316835" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-653615" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-653615
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-653615: (2.121542429s)
--- PASS: TestDockerEnvContainerd (44.53s)
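
The SSH_AUTH_SOCK/DOCKER_HOST environment seen above is exactly what minikube docker-env --ssh-host --ssh-add emits: it points the local docker CLI at the Docker daemon inside the minikube node over ssh://. The usual way to consume it is to eval the output; a minimal sketch:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-653615)"
    docker version
    docker image ls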

                                                
                                    
TestErrorSpam/setup (31.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-426711 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-426711 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-426711 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-426711 --driver=docker  --container-runtime=containerd: (31.78735249s)
--- PASS: TestErrorSpam/setup (31.79s)

                                                
                                    
TestErrorSpam/start (0.75s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 start --dry-run
--- PASS: TestErrorSpam/start (0.75s)

                                                
                                    
TestErrorSpam/status (1.2s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 status
--- PASS: TestErrorSpam/status (1.20s)

                                                
                                    
TestErrorSpam/pause (1.77s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 pause
--- PASS: TestErrorSpam/pause (1.77s)

                                                
                                    
TestErrorSpam/unpause (1.89s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

                                                
                                    
TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 stop: (1.327093709s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-426711 --log_dir /tmp/nospam-426711 stop
--- PASS: TestErrorSpam/stop (1.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19450-292730/.minikube/files/etc/test/nested/copy/298130/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (57s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-423031 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-423031 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (56.995570766s)
--- PASS: TestFunctional/serial/StartWithProxy (57.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.25s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-423031 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-423031 --alsologtostderr -v=8: (7.246983403s)
functional_test.go:663: soft start took 7.25139126s for "functional-423031" cluster.
--- PASS: TestFunctional/serial/SoftStart (7.25s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-423031 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 cache add registry.k8s.io/pause:3.1: (1.838600548s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 cache add registry.k8s.io/pause:3.3: (1.461944284s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 cache add registry.k8s.io/pause:latest: (1.305326252s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.61s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-423031 /tmp/TestFunctionalserialCacheCmdcacheadd_local3708890147/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cache add minikube-local-cache-test:functional-423031
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cache delete minikube-local-cache-test:functional-423031
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-423031
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (305.656979ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 cache reload: (1.187414609s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.21s)
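
The sequence above is the whole cache contract in miniature: removing an image inside the node (crictl rmi) does not touch minikube's on-host cache, and cache reload pushes the cached images back into the container runtime. Condensed from the log:

    out/minikube-linux-arm64 -p functional-423031 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-423031 cache reload
    out/minikube-linux-arm64 -p functional-423031 ssh sudo crictl inspecti registry.k8s.io/pause:latest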

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 kubectl -- --context functional-423031 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-423031 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (44.65s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-423031 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-423031 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.64666555s)
functional_test.go:761: restart took 44.646791793s for "functional-423031" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.65s)
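
The --extra-config flag exercised here takes component.flag=value form and is forwarded to the named component at start, which is why the restart has to wait for all pods to settle again. The invocation pattern, as used by the test:

    out/minikube-linux-arm64 start -p functional-423031 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all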

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-423031 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
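
The phase/status pairs above come from a single query over the control-plane pods; a rough shell equivalent of what the test inspects (the jsonpath template is an added illustration, not the test's own code):

    kubectl --context functional-423031 get po -l tier=control-plane -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'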

                                                
                                    
TestFunctional/serial/LogsCmd (1.67s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 logs: (1.670435981s)
--- PASS: TestFunctional/serial/LogsCmd (1.67s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.66s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 logs --file /tmp/TestFunctionalserialLogsFileCmd358646115/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 logs --file /tmp/TestFunctionalserialLogsFileCmd358646115/001/logs.txt: (1.655048007s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.66s)

                                                
                                    
TestFunctional/serial/InvalidService (4.99s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-423031 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-423031
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-423031: exit status 115 (833.216543ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30241 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-423031 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.99s)
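
Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the service from invalidsvc.yaml gets a NodePort URL published, but its selector matches no running pod, so nothing answers on it. One way to confirm the empty backing by hand (an added check, not part of the test):

    kubectl --context functional-423031 get endpoints invalid-svc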

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 config get cpus: exit status 14 (78.046035ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 config get cpus: exit status 14 (75.399766ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.94s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-423031 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-423031 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 333916: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.94s)
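
The "unable to kill pid" note is benign: the dashboard process had already exited by the time the test's cleanup ran. The daemon line shows the non-interactive invocation pattern, where --url prints the proxied address instead of opening a browser:

    out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-423031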

                                                
                                    
TestFunctional/parallel/DryRun (0.68s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-423031 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-423031 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (324.868703ms)

-- stdout --
	* [functional-423031] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0815 17:18:44.325419  333478 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:18:44.325620  333478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:18:44.325644  333478 out.go:358] Setting ErrFile to fd 2...
	I0815 17:18:44.325662  333478 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:18:44.325922  333478 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:18:44.326328  333478 out.go:352] Setting JSON to false
	I0815 17:18:44.327370  333478 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7268,"bootTime":1723735057,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 17:18:44.327470  333478 start.go:139] virtualization:  
	I0815 17:18:44.332237  333478 out.go:177] * [functional-423031] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 17:18:44.335598  333478 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:18:44.335659  333478 notify.go:220] Checking for updates...
	I0815 17:18:44.338318  333478 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:18:44.340249  333478 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:18:44.353336  333478 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	I0815 17:18:44.355324  333478 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 17:18:44.357189  333478 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:18:44.359617  333478 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:18:44.360171  333478 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:18:44.407741  333478 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:18:44.407851  333478 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:18:44.552498  333478 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 17:18:44.534314922 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:18:44.552603  333478 docker.go:307] overlay module found
	I0815 17:18:44.555163  333478 out.go:177] * Using the docker driver based on existing profile
	I0815 17:18:44.557457  333478 start.go:297] selected driver: docker
	I0815 17:18:44.557474  333478 start.go:901] validating driver "docker" against &{Name:functional-423031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-423031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:18:44.557567  333478 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:18:44.561140  333478 out.go:201] 
	W0815 17:18:44.563500  333478 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0815 17:18:44.565574  333478 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-423031 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.68s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.27s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-423031 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-423031 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (273.944229ms)

-- stdout --
	* [functional-423031] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0815 17:18:44.373946  333483 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:18:44.374128  333483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:18:44.374140  333483 out.go:358] Setting ErrFile to fd 2...
	I0815 17:18:44.374146  333483 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:18:44.375056  333483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:18:44.385672  333483 out.go:352] Setting JSON to false
	I0815 17:18:44.386732  333483 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7268,"bootTime":1723735057,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 17:18:44.386871  333483 start.go:139] virtualization:  
	I0815 17:18:44.390034  333483 out.go:177] * [functional-423031] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0815 17:18:44.392106  333483 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:18:44.392184  333483 notify.go:220] Checking for updates...
	I0815 17:18:44.396470  333483 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:18:44.398295  333483 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:18:44.401868  333483 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	I0815 17:18:44.403874  333483 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 17:18:44.406980  333483 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:18:44.409520  333483 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:18:44.410194  333483 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:18:44.436393  333483 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:18:44.436542  333483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:18:44.526898  333483 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-15 17:18:44.515957542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:18:44.527009  333483 docker.go:307] overlay module found
	I0815 17:18:44.529884  333483 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0815 17:18:44.531984  333483 start.go:297] selected driver: docker
	I0815 17:18:44.532002  333483 start.go:901] validating driver "docker" against &{Name:functional-423031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723650208-19443@sha256:2be48dc5c74cde3c1d15ac913a640f4a2331b48358b81777568fb487d2757002 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-423031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0815 17:18:44.532107  333483 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:18:44.534774  333483 out.go:201] 
	W0815 17:18:44.538403  333483 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0815 17:18:44.540723  333483 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.27s)

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)
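The status sub-tests above exercise minikube's Go-template output. A minimal sketch of the same checks run by hand (profile name taken from this run; any field of the status struct can be pulled the same way):

	out/minikube-linux-arm64 -p functional-423031 status --format='{{.Host}}'
	out/minikube-linux-arm64 -p functional-423031 status -o json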

TestFunctional/parallel/ServiceCmdConnect (8.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-423031 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-423031 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-4bmkq" [1bdfe18b-13a4-43cc-a6c4-b1f30d8867d7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-4bmkq" [1bdfe18b-13a4-43cc-a6c4-b1f30d8867d7] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004001038s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32500
functional_test.go:1675: http://192.168.49.2:32500: success! body:

Hostname: hello-node-connect-65d86f57f4-4bmkq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32500
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)
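The connect check is reproducible outside the harness; the NodePort (32500 in this run) is assigned per service, so capture the URL rather than hard-coding it. A sketch, assuming curl is available on the host:

	kubectl --context functional-423031 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-423031 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-arm64 -p functional-423031 service hello-node-connect --url)
	curl "$URL"    # echoserver replies with the request details, as captured above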

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (24.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [942f4b48-fa7a-49fb-a700-ede5f12802fe] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003615617s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-423031 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-423031 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-423031 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-423031 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [800be457-783a-457e-9f8b-83d7a709d74f] Pending
helpers_test.go:344: "sp-pod" [800be457-783a-457e-9f8b-83d7a709d74f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [800be457-783a-457e-9f8b-83d7a709d74f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004300571s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-423031 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-423031 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-423031 delete -f testdata/storage-provisioner/pod.yaml: (1.816023486s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-423031 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [aabecd88-1cd1-46bd-aea3-1f8509abe7f9] Pending
helpers_test.go:344: "sp-pod" [aabecd88-1cd1-46bd-aea3-1f8509abe7f9] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004811402s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-423031 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.91s)
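The log does not reproduce testdata/storage-provisioner/pvc.yaml itself; a claim along these lines exercises the same dynamic-provisioning path (the claim name myclaim matches the get pvc call above; the storage size is illustrative):

	kubectl --context functional-423031 apply -f - <<'EOF'
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: myclaim
	spec:
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 500Mi
	EOF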

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (1.99s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh -n functional-423031 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cp functional-423031:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2193179846/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh -n functional-423031 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh -n functional-423031 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.99s)
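The three cp invocations cover copying a host file into the node, pulling it back out, and copying into a node path that does not yet exist. Condensed (the ./cp-test-copy.txt destination is illustrative):

	out/minikube-linux-arm64 -p functional-423031 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-arm64 -p functional-423031 cp functional-423031:/home/docker/cp-test.txt ./cp-test-copy.txt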

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/298130/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo cat /etc/test/nested/copy/298130/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
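FileSync asserts that files staged on the host show up inside the node at the same path. A sketch of staging such a file, assuming the usual $MINIKUBE_HOME/files convention (minikube mirrors that tree into the node when the machine is provisioned):

	mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/298130"
	echo 'Test file for checking file sync process' > "$MINIKUBE_HOME/files/etc/test/nested/copy/298130/hosts"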

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/298130.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo cat /etc/ssl/certs/298130.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/298130.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo cat /usr/share/ca-certificates/298130.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2981302.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo cat /etc/ssl/certs/2981302.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2981302.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo cat /usr/share/ca-certificates/2981302.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)
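The .0 names checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention for trust-store lookups. Assuming openssl is available in the node image, the hash of a synced cert can be recomputed for comparison:

	out/minikube-linux-arm64 -p functional-423031 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/298130.pem"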

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-423031 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
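The go-template enumerates label keys; the same information is available without templating:

	kubectl --context functional-423031 get nodes --show-labels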

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 ssh "sudo systemctl is-active docker": exit status 1 (339.526876ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 ssh "sudo systemctl is-active crio": exit status 1 (334.425064ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
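systemctl is-active prints the unit state and exits non-zero for anything but active (the remote exit status 3 above is the conventional code for inactive; minikube ssh then reports its own exit status 1). A sketch of the manual check:

	out/minikube-linux-arm64 -p functional-423031 ssh "sudo systemctl is-active docker"; echo "exit=$?"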

TestFunctional/parallel/License (0.23s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.23s)

TestFunctional/parallel/Version/short (0.13s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 version --short
--- PASS: TestFunctional/parallel/Version/short (0.13s)

TestFunctional/parallel/Version/components (1.36s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 version -o=json --components: (1.358844899s)
--- PASS: TestFunctional/parallel/Version/components (1.36s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-423031 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-423031
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-423031
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-423031 image ls --format short --alsologtostderr:
I0815 17:18:47.047892  334000 out.go:345] Setting OutFile to fd 1 ...
I0815 17:18:47.048027  334000 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:47.048039  334000 out.go:358] Setting ErrFile to fd 2...
I0815 17:18:47.048045  334000 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:47.048278  334000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
I0815 17:18:47.048975  334000 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:47.049112  334000 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:47.049641  334000 cli_runner.go:164] Run: docker container inspect functional-423031 --format={{.State.Status}}
I0815 17:18:47.066881  334000 ssh_runner.go:195] Run: systemctl --version
I0815 17:18:47.066932  334000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423031
I0815 17:18:47.083274  334000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/functional-423031/id_rsa Username:docker}
I0815 17:18:47.177533  334000 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
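As the stderr trace shows, image ls is a thin wrapper: it opens an SSH session into the node and parses the output of crictl. The underlying call can be issued directly:

	out/minikube-linux-arm64 -p functional-423031 ssh "sudo crictl images --output json"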

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-423031 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-423031  | sha256:97b34c | 992B   |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | latest             | sha256:235ff2 | 67.6MB |
| localhost/my-image                          | functional-423031  | sha256:207427 | 831kB  |
| docker.io/library/nginx                     | alpine             | sha256:d7cd33 | 18.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| docker.io/kicbase/echo-server               | functional-423031  | sha256:ce2d2c | 2.17MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-423031 image ls --format table --alsologtostderr:
I0815 17:18:50.567343  334507 out.go:345] Setting OutFile to fd 1 ...
I0815 17:18:50.567559  334507 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:50.567589  334507 out.go:358] Setting ErrFile to fd 2...
I0815 17:18:50.567608  334507 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:50.567882  334507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
I0815 17:18:50.568555  334507 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:50.568736  334507 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:50.569325  334507 cli_runner.go:164] Run: docker container inspect functional-423031 --format={{.State.Status}}
I0815 17:18:50.586898  334507 ssh_runner.go:195] Run: systemctl --version
I0815 17:18:50.586948  334507 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423031
I0815 17:18:50.604343  334507 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/functional-423031/id_rsa Username:docker}
I0815 17:18:50.707403  334507 ssh_runner.go:195] Run: sudo crictl images --output json
2024/08/15 17:18:55 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-423031 image ls --format json --alsologtostderr:
[{"id":"sha256:97b34c782edb6eb8d9c324de5f94f34e583411f37b900691396da4dcb88a15fa","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-423031"],"size":"992"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-423031"],"size":"2173567"},{"id":"sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"18253575"},{"id":"sha256:235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51","repoDigests":["docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40"],"repoTags":["docker.io/library/nginx:latest"],"size":"67647657"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:20742708f697c1a227314130e1e844aa5fb9d18f41fedeb5676120af2df0b3af","repoDigests":[],"repoTags":["localhost/my-image:functional-423031"],"size":"830618"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-423031 image ls --format json --alsologtostderr:
I0815 17:18:50.310712  334475 out.go:345] Setting OutFile to fd 1 ...
I0815 17:18:50.311269  334475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:50.311288  334475 out.go:358] Setting ErrFile to fd 2...
I0815 17:18:50.311293  334475 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:50.312679  334475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
I0815 17:18:50.314749  334475 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:50.314939  334475 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:50.315574  334475 cli_runner.go:164] Run: docker container inspect functional-423031 --format={{.State.Status}}
I0815 17:18:50.335268  334475 ssh_runner.go:195] Run: systemctl --version
I0815 17:18:50.335320  334475 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423031
I0815 17:18:50.363417  334475 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/functional-423031/id_rsa Username:docker}
I0815 17:18:50.457748  334475 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-423031 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-423031
size: "2173567"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:97b34c782edb6eb8d9c324de5f94f34e583411f37b900691396da4dcb88a15fa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-423031
size: "992"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
repoTags:
- docker.io/library/nginx:alpine
size: "18253575"
- id: sha256:235ff27fe79567e8ccaf4d26a2d24828a65898a83b97fba3c7e39ec4621e1b51
repoDigests:
- docker.io/library/nginx@sha256:98f8ec75657d21b924fe4f69b6b9bff2f6550ea48838af479d8894a852000e40
repoTags:
- docker.io/library/nginx:latest
size: "67647657"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-423031 image ls --format yaml --alsologtostderr:
I0815 17:18:47.263596  334031 out.go:345] Setting OutFile to fd 1 ...
I0815 17:18:47.263790  334031 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:47.263816  334031 out.go:358] Setting ErrFile to fd 2...
I0815 17:18:47.263834  334031 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:47.264096  334031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
I0815 17:18:47.264729  334031 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:47.264896  334031 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:47.265430  334031 cli_runner.go:164] Run: docker container inspect functional-423031 --format={{.State.Status}}
I0815 17:18:47.282887  334031 ssh_runner.go:195] Run: systemctl --version
I0815 17:18:47.282939  334031 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423031
I0815 17:18:47.304234  334031 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/functional-423031/id_rsa Username:docker}
I0815 17:18:47.397987  334031 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 ssh pgrep buildkitd: exit status 1 (254.785366ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image build -t localhost/my-image:functional-423031 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 image build -t localhost/my-image:functional-423031 testdata/build --alsologtostderr: (2.276539732s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-423031 image build -t localhost/my-image:functional-423031 testdata/build --alsologtostderr:
I0815 17:18:47.777115  334134 out.go:345] Setting OutFile to fd 1 ...
I0815 17:18:47.777869  334134 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:47.778197  334134 out.go:358] Setting ErrFile to fd 2...
I0815 17:18:47.778223  334134 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0815 17:18:47.778525  334134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
I0815 17:18:47.779346  334134 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:47.780751  334134 config.go:182] Loaded profile config "functional-423031": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0815 17:18:47.781316  334134 cli_runner.go:164] Run: docker container inspect functional-423031 --format={{.State.Status}}
I0815 17:18:47.800175  334134 ssh_runner.go:195] Run: systemctl --version
I0815 17:18:47.800227  334134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-423031
I0815 17:18:47.825513  334134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/functional-423031/id_rsa Username:docker}
I0815 17:18:47.917430  334134 build_images.go:161] Building image from path: /tmp/build.1086233311.tar
I0815 17:18:47.917501  334134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0815 17:18:47.926725  334134 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1086233311.tar
I0815 17:18:47.930215  334134 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1086233311.tar: stat -c "%s %y" /var/lib/minikube/build/build.1086233311.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1086233311.tar': No such file or directory
I0815 17:18:47.930242  334134 ssh_runner.go:362] scp /tmp/build.1086233311.tar --> /var/lib/minikube/build/build.1086233311.tar (3072 bytes)
I0815 17:18:47.956426  334134 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1086233311
I0815 17:18:47.966156  334134 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1086233311 -xf /var/lib/minikube/build/build.1086233311.tar
I0815 17:18:47.975753  334134 containerd.go:394] Building image: /var/lib/minikube/build/build.1086233311
I0815 17:18:47.975838  334134 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1086233311 --local dockerfile=/var/lib/minikube/build/build.1086233311 --output type=image,name=localhost/my-image:functional-423031
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.1s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a56d1adc9010c7dd3166203d61f08594b0edfab71d266e02cb1d95b9e17f81f4
#8 exporting manifest sha256:a56d1adc9010c7dd3166203d61f08594b0edfab71d266e02cb1d95b9e17f81f4 0.0s done
#8 exporting config sha256:20742708f697c1a227314130e1e844aa5fb9d18f41fedeb5676120af2df0b3af 0.0s done
#8 naming to localhost/my-image:functional-423031 done
#8 DONE 0.1s
I0815 17:18:49.939692  334134 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1086233311 --local dockerfile=/var/lib/minikube/build/build.1086233311 --output type=image,name=localhost/my-image:functional-423031: (1.963809982s)
I0815 17:18:49.939773  334134 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1086233311
I0815 17:18:49.955434  334134 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1086233311.tar
I0815 17:18:49.967375  334134 build_images.go:217] Built localhost/my-image:functional-423031 from /tmp/build.1086233311.tar
I0815 17:18:49.967407  334134 build_images.go:133] succeeded building to: functional-423031
I0815 17:18:49.967413  334134 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.80s)
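The buildkit steps imply a three-instruction Dockerfile; a plausible reconstruction of testdata/build (the exact file is not reproduced in the log) and the matching build invocation:

	cat > Dockerfile <<'EOF'
	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-arm64 -p functional-423031 image build -t localhost/my-image:functional-423031 .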

TestFunctional/parallel/ImageCommands/Setup (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-423031
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)
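update-context rewrites the profile's kubeconfig entry so kubectl targets the cluster's current IP and port; with nothing changed, as in these three cases, it is a no-op. Assuming the profile is the active context, the result can be inspected with:

	out/minikube-linux-arm64 -p functional-423031 update-context
	kubectl config view --minify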

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image load --daemon kicbase/echo-server:functional-423031 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 image load --daemon kicbase/echo-server:functional-423031 --alsologtostderr: (1.202579481s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.49s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image load --daemon kicbase/echo-server:functional-423031 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-423031 image load --daemon kicbase/echo-server:functional-423031 --alsologtostderr: (1.105416458s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.36s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-423031 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-423031 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-mzrtf" [f2b72cc9-2822-48c9-8790-ff9a555a20e7] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-mzrtf" [f2b72cc9-2822-48c9-8790-ff9a555a20e7] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.003981707s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.29s)
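
Note: the DeployApp setup above is the standard create/expose pair that the later
ServiceCmd subtests resolve against; a sketch with the same names as the log:

  kubectl --context functional-423031 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-423031 expose deployment hello-node --type=NodePort --port=8080
  # the test then waits for pods matching app=hello-node to become Ready
  kubectl --context functional-423031 get pods -l app=hello-node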

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-423031
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image load --daemon kicbase/echo-server:functional-423031 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image save kicbase/echo-server:functional-423031 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image rm kicbase/echo-server:functional-423031 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.98s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-423031
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 image save --daemon kicbase/echo-server:functional-423031 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-423031
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)
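
Note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together
round-trip the image through a tarball and back into the host daemon; condensed
from the invocations above (the tar path is shortened here from the workspace path
this job used):

  minikube -p functional-423031 image save kicbase/echo-server:functional-423031 ./echo-server-save.tar
  minikube -p functional-423031 image rm kicbase/echo-server:functional-423031
  minikube -p functional-423031 image load ./echo-server-save.tar
  # export back into the host Docker daemon and verify it arrived
  minikube -p functional-423031 image save --daemon kicbase/echo-server:functional-423031
  docker image inspect kicbase/echo-server:functional-423031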

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-423031 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-423031 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-423031 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 330177: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-423031 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-423031 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-423031 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [883bb0fd-0d8d-4501-ba0f-2fd74a99e184] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [883bb0fd-0d8d-4501-ba0f-2fd74a99e184] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004095594s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.35s)

TestFunctional/parallel/ServiceCmd/List (0.33s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 service list -o json
functional_test.go:1494: Took "337.123904ms" to run "out/minikube-linux-arm64 -p functional-423031 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.34s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31717
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31717
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
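
Note: HTTPS, Format and URL all resolve the same NodePort endpoint
(https://192.168.49.2:31717 above) through different flag combinations, as logged:

  minikube -p functional-423031 service --namespace=default --https --url hello-node
  minikube -p functional-423031 service hello-node --url --format={{.IP}}
  minikube -p functional-423031 service hello-node --url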

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-423031 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.236.52 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-423031 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
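
Note: the tunnel subtests follow the usual LoadBalancer workflow on the docker
driver; a sketch reconstructed from the logged commands (the assigned ingress IP,
10.111.236.52 on this run, varies; the curl stands in for the test's HTTP probe):

  # keep a tunnel running in the background (the tests run it as a daemon)
  minikube -p functional-423031 tunnel --alsologtostderr &
  # read the LoadBalancer ingress IP once the tunnel has claimed it
  kubectl --context functional-423031 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  # direct access from the host should now work
  curl http://10.111.236.52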

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "342.476234ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "58.640031ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "321.994035ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "49.526851ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/MountCmd/any-port (8.09s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdany-port3057222153/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1723742312253654731" to /tmp/TestFunctionalparallelMountCmdany-port3057222153/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1723742312253654731" to /tmp/TestFunctionalparallelMountCmdany-port3057222153/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1723742312253654731" to /tmp/TestFunctionalparallelMountCmdany-port3057222153/001/test-1723742312253654731
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (317.498171ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 15 17:18 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 15 17:18 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 15 17:18 test-1723742312253654731
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh cat /mount-9p/test-1723742312253654731
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-423031 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4e5ca8ac-25a2-4d70-9f29-d74af03f2726] Pending
helpers_test.go:344: "busybox-mount" [4e5ca8ac-25a2-4d70-9f29-d74af03f2726] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4e5ca8ac-25a2-4d70-9f29-d74af03f2726] Running
helpers_test.go:344: "busybox-mount" [4e5ca8ac-25a2-4d70-9f29-d74af03f2726] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4e5ca8ac-25a2-4d70-9f29-d74af03f2726] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.0055429s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-423031 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdany-port3057222153/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.09s)
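
Note: the 9p mount checks above reduce to the sequence below; the first findmnt
commonly fails once while the mount daemon is still coming up, which is why the
test retries it (the host directory is arbitrary here):

  minikube mount -p functional-423031 /tmp/mount-src:/mount-9p &
  minikube -p functional-423031 ssh "findmnt -T /mount-9p | grep 9p"
  minikube -p functional-423031 ssh -- ls -la /mount-9p
  # tear down the mount inside the guest
  minikube -p functional-423031 ssh "sudo umount -f /mount-9p"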

TestFunctional/parallel/MountCmd/specific-port (1.92s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdspecific-port2648926512/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (313.167209ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdspecific-port2648926512/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 ssh "sudo umount -f /mount-9p": exit status 1 (278.873209ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-423031 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdspecific-port2648926512/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.92s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup494660585/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup494660585/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup494660585/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T" /mount1: exit status 1 (548.49256ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-423031 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-423031 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup494660585/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup494660585/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-423031 /tmp/TestFunctionalparallelMountCmdVerifyCleanup494660585/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)
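
Note: VerifyCleanup relies on the kill flag to reap all three mount daemons at
once, which is why the per-mount stops afterwards find the processes already gone:

  minikube mount -p functional-423031 --kill=true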

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-423031
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-423031
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-423031
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (108.49s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-426265 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0815 17:19:00.706829  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:00.713583  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:00.724957  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:00.746299  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:00.787662  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:00.869049  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:01.030514  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:01.352146  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:01.994322  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:03.275926  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:05.837872  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:10.959406  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:21.201311  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:19:41.683017  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:20:22.644364  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-426265 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.647504083s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (108.49s)
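
Note: the StartCluster invocation above boots a three-control-plane (HA) cluster
fronted by a shared endpoint (the status checks later in this report probe
https://192.168.49.254:8443); as logged:

  minikube start -p ha-426265 --wait=true --memory=2200 --ha -v=7 --alsologtostderr \
    --driver=docker --container-runtime=containerd
  minikube -p ha-426265 status -v=7 --alsologtostderr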

TestMultiControlPlane/serial/DeployApp (32.44s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-426265 -- rollout status deployment/busybox: (29.494957929s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-c67hq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-nnkrl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-rnpps -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-c67hq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-nnkrl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-rnpps -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-c67hq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-nnkrl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-rnpps -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.44s)
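
Note: the nslookup matrix above checks in-cluster DNS from every busybox replica;
per pod the probe is just (substitute a pod name from the rollout):

  minikube kubectl -p ha-426265 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local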

TestMultiControlPlane/serial/PingHostFromPods (1.53s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-c67hq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-c67hq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-nnkrl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-nnkrl -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-rnpps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-426265 -- exec busybox-7dff88458-rnpps -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.53s)

TestMultiControlPlane/serial/AddWorkerNode (20.98s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-426265 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-426265 -v=7 --alsologtostderr: (19.984532062s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.98s)

TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-426265 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.8s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.80s)

TestMultiControlPlane/serial/CopyFile (19.12s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp testdata/cp-test.txt ha-426265:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile117167352/001/cp-test_ha-426265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265 "sudo cat /home/docker/cp-test.txt"
E0815 17:21:44.566363  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265:/home/docker/cp-test.txt ha-426265-m02:/home/docker/cp-test_ha-426265_ha-426265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test_ha-426265_ha-426265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265:/home/docker/cp-test.txt ha-426265-m03:/home/docker/cp-test_ha-426265_ha-426265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m03 "sudo cat /home/docker/cp-test_ha-426265_ha-426265-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265:/home/docker/cp-test.txt ha-426265-m04:/home/docker/cp-test_ha-426265_ha-426265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m04 "sudo cat /home/docker/cp-test_ha-426265_ha-426265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp testdata/cp-test.txt ha-426265-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile117167352/001/cp-test_ha-426265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m02:/home/docker/cp-test.txt ha-426265:/home/docker/cp-test_ha-426265-m02_ha-426265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265 "sudo cat /home/docker/cp-test_ha-426265-m02_ha-426265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m02:/home/docker/cp-test.txt ha-426265-m03:/home/docker/cp-test_ha-426265-m02_ha-426265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m03 "sudo cat /home/docker/cp-test_ha-426265-m02_ha-426265-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m02:/home/docker/cp-test.txt ha-426265-m04:/home/docker/cp-test_ha-426265-m02_ha-426265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m04 "sudo cat /home/docker/cp-test_ha-426265-m02_ha-426265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp testdata/cp-test.txt ha-426265-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile117167352/001/cp-test_ha-426265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m03:/home/docker/cp-test.txt ha-426265:/home/docker/cp-test_ha-426265-m03_ha-426265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265 "sudo cat /home/docker/cp-test_ha-426265-m03_ha-426265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m03:/home/docker/cp-test.txt ha-426265-m02:/home/docker/cp-test_ha-426265-m03_ha-426265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test_ha-426265-m03_ha-426265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m03:/home/docker/cp-test.txt ha-426265-m04:/home/docker/cp-test_ha-426265-m03_ha-426265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m04 "sudo cat /home/docker/cp-test_ha-426265-m03_ha-426265-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp testdata/cp-test.txt ha-426265-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile117167352/001/cp-test_ha-426265-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m04:/home/docker/cp-test.txt ha-426265:/home/docker/cp-test_ha-426265-m04_ha-426265.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265 "sudo cat /home/docker/cp-test_ha-426265-m04_ha-426265.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m04:/home/docker/cp-test.txt ha-426265-m02:/home/docker/cp-test_ha-426265-m04_ha-426265-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test_ha-426265-m04_ha-426265-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 cp ha-426265-m04:/home/docker/cp-test.txt ha-426265-m03:/home/docker/cp-test_ha-426265-m04_ha-426265-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 ssh -n ha-426265-m03 "sudo cat /home/docker/cp-test_ha-426265-m04_ha-426265-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.12s)
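
Note: CopyFile exercises every direction of `minikube cp` across all four nodes;
one leg of the matrix, as logged:

  minikube -p ha-426265 cp testdata/cp-test.txt ha-426265-m02:/home/docker/cp-test.txt
  minikube -p ha-426265 ssh -n ha-426265-m02 "sudo cat /home/docker/cp-test.txt"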

TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-426265 node stop m02 -v=7 --alsologtostderr: (12.132856168s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr: exit status 7 (738.673238ms)

-- stdout --
	ha-426265
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-426265-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426265-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-426265-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0815 17:22:13.917918  350549 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:22:13.918082  350549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:22:13.918103  350549 out.go:358] Setting ErrFile to fd 2...
	I0815 17:22:13.918131  350549 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:22:13.918472  350549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:22:13.918688  350549 out.go:352] Setting JSON to false
	I0815 17:22:13.918761  350549 mustload.go:65] Loading cluster: ha-426265
	I0815 17:22:13.918836  350549 notify.go:220] Checking for updates...
	I0815 17:22:13.919244  350549 config.go:182] Loaded profile config "ha-426265": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:22:13.919286  350549 status.go:255] checking status of ha-426265 ...
	I0815 17:22:13.919786  350549 cli_runner.go:164] Run: docker container inspect ha-426265 --format={{.State.Status}}
	I0815 17:22:13.940076  350549 status.go:330] ha-426265 host status = "Running" (err=<nil>)
	I0815 17:22:13.940109  350549 host.go:66] Checking if "ha-426265" exists ...
	I0815 17:22:13.940425  350549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-426265
	I0815 17:22:13.960720  350549 host.go:66] Checking if "ha-426265" exists ...
	I0815 17:22:13.961346  350549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:22:13.961511  350549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-426265
	I0815 17:22:13.978555  350549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/ha-426265/id_rsa Username:docker}
	I0815 17:22:14.075346  350549 ssh_runner.go:195] Run: systemctl --version
	I0815 17:22:14.080092  350549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:22:14.092828  350549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:22:14.162416  350549 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-15 17:22:14.15202513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:22:14.162982  350549 kubeconfig.go:125] found "ha-426265" server: "https://192.168.49.254:8443"
	I0815 17:22:14.163015  350549 api_server.go:166] Checking apiserver status ...
	I0815 17:22:14.163064  350549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:22:14.174497  350549 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1394/cgroup
	I0815 17:22:14.184132  350549 api_server.go:182] apiserver freezer: "3:freezer:/docker/0d123935977a020118509da53704903746bea58fcb47695b1c599a7df120fb2e/kubepods/burstable/pod1206ac8632be52c7d587d35787dacbee/a800b9f1080ef71002ba4cf142a8213e007229108a2605eb77cd6b35df9c3917"
	I0815 17:22:14.184208  350549 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0d123935977a020118509da53704903746bea58fcb47695b1c599a7df120fb2e/kubepods/burstable/pod1206ac8632be52c7d587d35787dacbee/a800b9f1080ef71002ba4cf142a8213e007229108a2605eb77cd6b35df9c3917/freezer.state
	I0815 17:22:14.193388  350549 api_server.go:204] freezer state: "THAWED"
	I0815 17:22:14.193417  350549 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 17:22:14.201368  350549 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 17:22:14.201402  350549 status.go:422] ha-426265 apiserver status = Running (err=<nil>)
	I0815 17:22:14.201415  350549 status.go:257] ha-426265 status: &{Name:ha-426265 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:22:14.201435  350549 status.go:255] checking status of ha-426265-m02 ...
	I0815 17:22:14.201768  350549 cli_runner.go:164] Run: docker container inspect ha-426265-m02 --format={{.State.Status}}
	I0815 17:22:14.223157  350549 status.go:330] ha-426265-m02 host status = "Stopped" (err=<nil>)
	I0815 17:22:14.223181  350549 status.go:343] host is not running, skipping remaining checks
	I0815 17:22:14.223188  350549 status.go:257] ha-426265-m02 status: &{Name:ha-426265-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:22:14.223214  350549 status.go:255] checking status of ha-426265-m03 ...
	I0815 17:22:14.223516  350549 cli_runner.go:164] Run: docker container inspect ha-426265-m03 --format={{.State.Status}}
	I0815 17:22:14.240293  350549 status.go:330] ha-426265-m03 host status = "Running" (err=<nil>)
	I0815 17:22:14.240319  350549 host.go:66] Checking if "ha-426265-m03" exists ...
	I0815 17:22:14.240730  350549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-426265-m03
	I0815 17:22:14.258637  350549 host.go:66] Checking if "ha-426265-m03" exists ...
	I0815 17:22:14.258962  350549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:22:14.259001  350549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-426265-m03
	I0815 17:22:14.280597  350549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/ha-426265-m03/id_rsa Username:docker}
	I0815 17:22:14.374774  350549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:22:14.387335  350549 kubeconfig.go:125] found "ha-426265" server: "https://192.168.49.254:8443"
	I0815 17:22:14.387367  350549 api_server.go:166] Checking apiserver status ...
	I0815 17:22:14.387407  350549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:22:14.398682  350549 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1339/cgroup
	I0815 17:22:14.409024  350549 api_server.go:182] apiserver freezer: "3:freezer:/docker/8be1531341a614c8396609456dc9467816a8183804e0fe4d7bee786b83cc73b7/kubepods/burstable/pode4ec1e260f7bb779e8728b6035d4892f/045e48dd6ce7172a55a3984a0ebbf3020878d13f991548e7897be1105d2ac454"
	I0815 17:22:14.409179  350549 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8be1531341a614c8396609456dc9467816a8183804e0fe4d7bee786b83cc73b7/kubepods/burstable/pode4ec1e260f7bb779e8728b6035d4892f/045e48dd6ce7172a55a3984a0ebbf3020878d13f991548e7897be1105d2ac454/freezer.state
	I0815 17:22:14.418280  350549 api_server.go:204] freezer state: "THAWED"
	I0815 17:22:14.418378  350549 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0815 17:22:14.426254  350549 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0815 17:22:14.426281  350549 status.go:422] ha-426265-m03 apiserver status = Running (err=<nil>)
	I0815 17:22:14.426291  350549 status.go:257] ha-426265-m03 status: &{Name:ha-426265-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:22:14.426310  350549 status.go:255] checking status of ha-426265-m04 ...
	I0815 17:22:14.426643  350549 cli_runner.go:164] Run: docker container inspect ha-426265-m04 --format={{.State.Status}}
	I0815 17:22:14.444574  350549 status.go:330] ha-426265-m04 host status = "Running" (err=<nil>)
	I0815 17:22:14.444661  350549 host.go:66] Checking if "ha-426265-m04" exists ...
	I0815 17:22:14.444968  350549 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-426265-m04
	I0815 17:22:14.463578  350549 host.go:66] Checking if "ha-426265-m04" exists ...
	I0815 17:22:14.463899  350549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:22:14.463939  350549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-426265-m04
	I0815 17:22:14.481629  350549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/ha-426265-m04/id_rsa Username:docker}
	I0815 17:22:14.574400  350549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:22:14.589810  350549 status.go:257] ha-426265-m04 status: &{Name:ha-426265-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
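
Note: after stopping m02, `status` exits 7 (as captured above) because one host is
down; the test treats that non-zero exit as the expected degraded state:

  minikube -p ha-426265 node stop m02 -v=7 --alsologtostderr
  minikube -p ha-426265 status -v=7 --alsologtostderr   # exit status 7 while m02 is stopped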

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.93s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-426265 node start m02 -v=7 --alsologtostderr: (17.730407473s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr: (1.096823443s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.93s)
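
Note: the restart path mirrors the stop above; once m02 rejoins, the plain
`kubectl get nodes` the test runs should show all four nodes Ready again:

  minikube -p ha-426265 node start m02 -v=7 --alsologtostderr
  kubectl get nodes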

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.82s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-426265 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-426265 -v=7 --alsologtostderr
E0815 17:23:06.571323  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:06.577824  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:06.589221  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:06.610630  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:06.652116  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:06.733609  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:06.895284  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:07.217219  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:07.859421  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:09.140891  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:11.702182  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-426265 -v=7 --alsologtostderr: (37.190072799s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-426265 --wait=true -v=7 --alsologtostderr
E0815 17:23:16.824474  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:27.065749  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:23:47.547362  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:24:00.703662  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:24:28.408299  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:24:28.509056  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-426265 --wait=true -v=7 --alsologtostderr: (1m33.482025843s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-426265
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (130.82s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-426265 node delete m03 -v=7 --alsologtostderr: (9.73357292s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)
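The Ready check at ha_test.go:519 above feeds kubectl's node list through a plain Go template. A small sketch that evaluates the same template string locally — the template text is copied from the test invocation; the JSON decode and glue around it are assumptions:

    package main

    import (
        "encoding/json"
        "os"
        "os/exec"
        "text/template"
    )

    // readyTmpl is the template string from the test invocation above.
    const readyTmpl = `{{range .items}}{{range .status.conditions}}` +
        `{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

    func main() {
        raw, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
        if err != nil {
            panic(err)
        }
        var nodes map[string]interface{}
        if err := json.Unmarshal(raw, &nodes); err != nil {
            panic(err)
        }
        t := template.Must(template.New("ready").Parse(readyTmpl))
        // Prints one " True" (or " False") line per node, as the test expects.
        if err := t.Execute(os.Stdout, nodes); err != nil {
            panic(err)
        }
    }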

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

TestMultiControlPlane/serial/StopCluster (36.03s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-426265 stop -v=7 --alsologtostderr: (35.920907063s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr: exit status 7 (113.514169ms)

-- stdout --
	ha-426265
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426265-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-426265-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0815 17:25:32.858289  364824 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:25:32.858494  364824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:25:32.858519  364824 out.go:358] Setting ErrFile to fd 2...
	I0815 17:25:32.858538  364824 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:25:32.858896  364824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:25:32.859177  364824 out.go:352] Setting JSON to false
	I0815 17:25:32.859246  364824 mustload.go:65] Loading cluster: ha-426265
	I0815 17:25:32.860065  364824 notify.go:220] Checking for updates...
	I0815 17:25:32.860317  364824 config.go:182] Loaded profile config "ha-426265": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:25:32.860347  364824 status.go:255] checking status of ha-426265 ...
	I0815 17:25:32.860852  364824 cli_runner.go:164] Run: docker container inspect ha-426265 --format={{.State.Status}}
	I0815 17:25:32.877199  364824 status.go:330] ha-426265 host status = "Stopped" (err=<nil>)
	I0815 17:25:32.877220  364824 status.go:343] host is not running, skipping remaining checks
	I0815 17:25:32.877228  364824 status.go:257] ha-426265 status: &{Name:ha-426265 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:25:32.877261  364824 status.go:255] checking status of ha-426265-m02 ...
	I0815 17:25:32.877582  364824 cli_runner.go:164] Run: docker container inspect ha-426265-m02 --format={{.State.Status}}
	I0815 17:25:32.894925  364824 status.go:330] ha-426265-m02 host status = "Stopped" (err=<nil>)
	I0815 17:25:32.894947  364824 status.go:343] host is not running, skipping remaining checks
	I0815 17:25:32.894955  364824 status.go:257] ha-426265-m02 status: &{Name:ha-426265-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:25:32.894984  364824 status.go:255] checking status of ha-426265-m04 ...
	I0815 17:25:32.895343  364824 cli_runner.go:164] Run: docker container inspect ha-426265-m04 --format={{.State.Status}}
	I0815 17:25:32.924352  364824 status.go:330] ha-426265-m04 host status = "Stopped" (err=<nil>)
	I0815 17:25:32.924376  364824 status.go:343] host is not running, skipping remaining checks
	I0815 17:25:32.924384  364824 status.go:257] ha-426265-m04 status: &{Name:ha-426265-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.03s)
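The `&{Name:ha-426265 Host:Stopped ...}` lines in the stderr block are Go's %+v rendering of a status value. A stand-in struct with field names read off this log (not taken from minikube's source) reproduces the shape:

    package main

    import "fmt"

    // Status is a stand-in; field names are inferred from the log output.
    type Status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
        TimeToStop string
        DockerEnv  string
        PodManEnv  string
    }

    func main() {
        st := Status{Name: "ha-426265", Host: "Stopped", Kubelet: "Stopped",
            APIServer: "Stopped", Kubeconfig: "Stopped"}
        // %+v of a pointer prints &{Name:ha-426265 Host:Stopped ...},
        // the same shape as the status.go:257 lines above.
        fmt.Printf("%+v\n", &st)
    }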

TestMultiControlPlane/serial/RestartCluster (42.13s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-426265 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0815 17:25:50.430551  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-426265 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (41.122483536s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (42.13s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (43.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-426265 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-426265 --control-plane -v=7 --alsologtostderr: (42.848617861s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-426265 status -v=7 --alsologtostderr: (1.016930975s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.095470223s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.10s)
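Several of the Degraded/HAppy checks above only run `minikube profile list --output json` and inspect the result. The log confirms the command and flag but does not show the JSON schema, so a sketch that decodes the payload generically rather than committing to field names:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        raw, err := exec.Command("minikube", "profile", "list",
            "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        // Generic decode: top-level keys only, no assumed schema.
        var doc map[string]json.RawMessage
        if err := json.Unmarshal(raw, &doc); err != nil {
            panic(err)
        }
        for key, val := range doc {
            fmt.Printf("%s: %d bytes\n", key, len(val))
        }
    }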

TestJSONOutput/start/Command (51.13s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-496605 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-496605 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.128888008s)
--- PASS: TestJSONOutput/start/Command (51.13s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.06s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-496605 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 pause -p json-output-496605 --output=json --user=testUser: (1.064270578s)
--- PASS: TestJSONOutput/pause/Command (1.06s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-496605 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-496605 --output=json --user=testUser
E0815 17:28:06.571314  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-496605 --output=json --user=testUser: (5.788383413s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-801990 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-801990 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.241005ms)

-- stdout --
	{"specversion":"1.0","id":"595af5ac-4bf7-453b-8999-b4be54d8123f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-801990] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"177e303c-9916-45cd-9f60-d459a9c47e0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19450"}}
	{"specversion":"1.0","id":"ebfb73f1-830c-4392-8f48-5fc8f82828b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1367b217-a08e-4d00-a8fc-083ead02d319","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig"}}
	{"specversion":"1.0","id":"00d76bf1-98ee-406c-8478-1670200d899a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube"}}
	{"specversion":"1.0","id":"5aa54a29-743a-4efd-966d-c88409ebc142","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"be5a9e6e-3f08-4947-a835-69eca1648812","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"efbd1317-9e55-4daa-b4e2-cfa5edf36753","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-801990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-801990
--- PASS: TestErrorJSONOutput (0.21s)
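The stdout block above is a stream of CloudEvents-style JSON lines, one event per line. A minimal line decoder for exactly the fields visible in this log (specversion, id, source, type, data); piping the JSON output of a minikube command into it is assumed:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event holds the fields visible in the stdout block above.
    type event struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // Pipe `minikube start ... --output=json` into stdin.
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // tolerate non-JSON lines
            }
            // For io.k8s.sigs.minikube.error events, Data carries
            // exitcode/message/name, e.g. DRV_UNSUPPORTED_OS above.
            fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
        }
    }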

TestKicCustomNetwork/create_custom_network (36.32s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-472398 --network=
E0815 17:28:34.273321  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-472398 --network=: (34.283478284s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-472398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-472398
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-472398: (2.019448201s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.32s)

TestKicCustomNetwork/use_default_bridge_network (32.09s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-244273 --network=bridge
E0815 17:29:00.704465  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-244273 --network=bridge: (30.138529656s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-244273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-244273
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-244273: (1.923424934s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.09s)

TestKicExistingNetwork (30.76s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-179867 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-179867 --network=existing-network: (28.613990701s)
helpers_test.go:175: Cleaning up "existing-network-179867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-179867
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-179867: (2.001557611s)
--- PASS: TestKicExistingNetwork (30.76s)

TestKicCustomSubnet (34.56s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-920331 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-920331 --subnet=192.168.60.0/24: (32.530059443s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-920331 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-920331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-920331
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-920331: (2.005758922s)
--- PASS: TestKicCustomSubnet (34.56s)
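The subnet assertion at kic_custom_network_test.go:161 above reads the network's IPAM config through a Go template. The same probe wrapped in a helper — the format string is copied from the log, the network name is the profile from this run, and error handling is minimal:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // networkSubnet mirrors the probe in the log:
    // docker network inspect <name> --format "{{(index .IPAM.Config 0).Subnet}}"
    func networkSubnet(name string) (string, error) {
        out, err := exec.Command("docker", "network", "inspect", name,
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        subnet, err := networkSubnet("custom-subnet-920331")
        if err != nil {
            panic(err)
        }
        fmt.Println(subnet) // expected 192.168.60.0/24 in the run above
    }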

TestKicStaticIP (38.01s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-133259 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-133259 --static-ip=192.168.200.200: (35.766634595s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-133259 ip
helpers_test.go:175: Cleaning up "static-ip-133259" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-133259
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-133259: (2.076191747s)
--- PASS: TestKicStaticIP (38.01s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (66.12s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-384025 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-384025 --driver=docker  --container-runtime=containerd: (29.543986946s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-386806 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-386806 --driver=docker  --container-runtime=containerd: (31.008143144s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-384025
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-386806
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-386806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-386806
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-386806: (2.056820281s)
helpers_test.go:175: Cleaning up "first-384025" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-384025
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-384025: (2.216911609s)
--- PASS: TestMinikubeProfile (66.12s)

TestMountStart/serial/StartWithMountFirst (6.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-755842 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-755842 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.685473889s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.69s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-755842 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.86s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-769626 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-769626 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.858472845s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.86s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-769626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-755842 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-755842 --alsologtostderr -v=5: (1.609368385s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-769626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.19s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-769626
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-769626: (1.189926152s)
--- PASS: TestMountStart/serial/Stop (1.19s)

TestMountStart/serial/RestartStopped (7.47s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-769626
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-769626: (6.467006187s)
--- PASS: TestMountStart/serial/RestartStopped (7.47s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-769626 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (64.86s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-996865 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0815 17:33:06.571230  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-996865 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.306367512s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.86s)

TestMultiNode/serial/DeployApp2Nodes (17.17s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-996865 -- rollout status deployment/busybox: (15.28119021s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-kv7gf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-ls6ms -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-kv7gf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-ls6ms -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-kv7gf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-ls6ms -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.17s)

TestMultiNode/serial/PingHostFrom2Pods (1.11s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-kv7gf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-kv7gf -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-ls6ms -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-996865 -- exec busybox-7dff88458-ls6ms -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.11s)
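The `awk 'NR==5' | cut -d' ' -f3` pipeline above picks the resolved address for host.minikube.internal off the fifth line of busybox nslookup output. A sketch of the same extraction in Go; the sample text is illustrative only, since the raw nslookup output is not shown in this log:

    package main

    import (
        "fmt"
        "strings"
    )

    // hostIP replays the shell pipeline: take line 5 (awk 'NR==5'),
    // then space-separated field 3 (cut -d' ' -f3).
    func hostIP(nslookupOut string) string {
        lines := strings.Split(nslookupOut, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ") // NR==5
        if len(fields) < 3 {
            return ""
        }
        return fields[2] // cut -d' ' -f3
    }

    func main() {
        // Assumed busybox-style nslookup output; format is an assumption.
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.67.1 host.minikube.internal\n"
        fmt.Println(hostIP(sample)) // 192.168.67.1
    }

The extracted address is then pinged from each pod (`ping -c 1 192.168.67.1` above) to prove pod-to-host connectivity.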

TestMultiNode/serial/AddNode (19.64s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-996865 -v 3 --alsologtostderr
E0815 17:34:00.703775  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-996865 -v 3 --alsologtostderr: (18.935581762s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.64s)

TestMultiNode/serial/MultiNodeLabels (0.11s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-996865 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (9.91s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp testdata/cp-test.txt multinode-996865:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2727368812/001/cp-test_multinode-996865.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865:/home/docker/cp-test.txt multinode-996865-m02:/home/docker/cp-test_multinode-996865_multinode-996865-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m02 "sudo cat /home/docker/cp-test_multinode-996865_multinode-996865-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865:/home/docker/cp-test.txt multinode-996865-m03:/home/docker/cp-test_multinode-996865_multinode-996865-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m03 "sudo cat /home/docker/cp-test_multinode-996865_multinode-996865-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp testdata/cp-test.txt multinode-996865-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2727368812/001/cp-test_multinode-996865-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865-m02:/home/docker/cp-test.txt multinode-996865:/home/docker/cp-test_multinode-996865-m02_multinode-996865.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865 "sudo cat /home/docker/cp-test_multinode-996865-m02_multinode-996865.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865-m02:/home/docker/cp-test.txt multinode-996865-m03:/home/docker/cp-test_multinode-996865-m02_multinode-996865-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m03 "sudo cat /home/docker/cp-test_multinode-996865-m02_multinode-996865-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp testdata/cp-test.txt multinode-996865-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2727368812/001/cp-test_multinode-996865-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865-m03:/home/docker/cp-test.txt multinode-996865:/home/docker/cp-test_multinode-996865-m03_multinode-996865.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865 "sudo cat /home/docker/cp-test_multinode-996865-m03_multinode-996865.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 cp multinode-996865-m03:/home/docker/cp-test.txt multinode-996865-m02:/home/docker/cp-test_multinode-996865-m03_multinode-996865-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 ssh -n multinode-996865-m02 "sudo cat /home/docker/cp-test_multinode-996865-m03_multinode-996865-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.91s)
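Each cell of the copy matrix above is one round-trip: `minikube cp` the file to a node path, then `minikube ssh -n <node> "sudo cat ..."` it back for comparison. A sketch of a single round-trip, using the profile and node names from this run and a byte-for-byte check:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        want, err := os.ReadFile("testdata/cp-test.txt")
        if err != nil {
            panic(err)
        }
        // Copy the file onto the m02 node, as helpers_test.go:556 does.
        if err := exec.Command("minikube", "-p", "multinode-996865", "cp",
            "testdata/cp-test.txt",
            "multinode-996865-m02:/home/docker/cp-test.txt").Run(); err != nil {
            panic(err)
        }
        // Read it back over SSH, as helpers_test.go:534 does.
        got, err := exec.Command("minikube", "-p", "multinode-996865", "ssh",
            "-n", "multinode-996865-m02",
            "sudo cat /home/docker/cp-test.txt").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("round-trip ok:",
            bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
    }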

TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-996865 node stop m03: (1.227045076s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-996865 status: exit status 7 (524.156317ms)

-- stdout --
	multinode-996865
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-996865-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-996865-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-996865 status --alsologtostderr: exit status 7 (506.623234ms)

-- stdout --
	multinode-996865
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-996865-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-996865-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0815 17:34:31.995294  418295 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:34:31.995506  418295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:34:31.995532  418295 out.go:358] Setting ErrFile to fd 2...
	I0815 17:34:31.995552  418295 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:34:31.995823  418295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:34:31.996064  418295 out.go:352] Setting JSON to false
	I0815 17:34:31.996143  418295 mustload.go:65] Loading cluster: multinode-996865
	I0815 17:34:31.996240  418295 notify.go:220] Checking for updates...
	I0815 17:34:31.996631  418295 config.go:182] Loaded profile config "multinode-996865": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:34:31.996678  418295 status.go:255] checking status of multinode-996865 ...
	I0815 17:34:31.997799  418295 cli_runner.go:164] Run: docker container inspect multinode-996865 --format={{.State.Status}}
	I0815 17:34:32.019835  418295 status.go:330] multinode-996865 host status = "Running" (err=<nil>)
	I0815 17:34:32.019865  418295 host.go:66] Checking if "multinode-996865" exists ...
	I0815 17:34:32.020181  418295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-996865
	I0815 17:34:32.048655  418295 host.go:66] Checking if "multinode-996865" exists ...
	I0815 17:34:32.048953  418295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:34:32.048994  418295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-996865
	I0815 17:34:32.067668  418295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/multinode-996865/id_rsa Username:docker}
	I0815 17:34:32.162748  418295 ssh_runner.go:195] Run: systemctl --version
	I0815 17:34:32.167415  418295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:34:32.179699  418295 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:34:32.231557  418295 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-15 17:34:32.221645001 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:34:32.232131  418295 kubeconfig.go:125] found "multinode-996865" server: "https://192.168.67.2:8443"
	I0815 17:34:32.232169  418295 api_server.go:166] Checking apiserver status ...
	I0815 17:34:32.232221  418295 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0815 17:34:32.243599  418295 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1411/cgroup
	I0815 17:34:32.252977  418295 api_server.go:182] apiserver freezer: "3:freezer:/docker/54c1fdcf6e351d3290343f78fe0ced3bd314c11c6d3ee121f48c158efef262df/kubepods/burstable/pod29ced7f1de862b696ef579b01e62f2cc/40b3b3173599565df3c50a7f6e21adde1cbaaa7f7b18d5edfa684ee63ef3590e"
	I0815 17:34:32.253050  418295 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/54c1fdcf6e351d3290343f78fe0ced3bd314c11c6d3ee121f48c158efef262df/kubepods/burstable/pod29ced7f1de862b696ef579b01e62f2cc/40b3b3173599565df3c50a7f6e21adde1cbaaa7f7b18d5edfa684ee63ef3590e/freezer.state
	I0815 17:34:32.262342  418295 api_server.go:204] freezer state: "THAWED"
	I0815 17:34:32.262371  418295 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0815 17:34:32.270170  418295 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0815 17:34:32.270201  418295 status.go:422] multinode-996865 apiserver status = Running (err=<nil>)
	I0815 17:34:32.270214  418295 status.go:257] multinode-996865 status: &{Name:multinode-996865 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:34:32.270233  418295 status.go:255] checking status of multinode-996865-m02 ...
	I0815 17:34:32.270616  418295 cli_runner.go:164] Run: docker container inspect multinode-996865-m02 --format={{.State.Status}}
	I0815 17:34:32.288571  418295 status.go:330] multinode-996865-m02 host status = "Running" (err=<nil>)
	I0815 17:34:32.288600  418295 host.go:66] Checking if "multinode-996865-m02" exists ...
	I0815 17:34:32.288915  418295 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-996865-m02
	I0815 17:34:32.305767  418295 host.go:66] Checking if "multinode-996865-m02" exists ...
	I0815 17:34:32.306100  418295 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0815 17:34:32.306149  418295 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-996865-m02
	I0815 17:34:32.322733  418295 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19450-292730/.minikube/machines/multinode-996865-m02/id_rsa Username:docker}
	I0815 17:34:32.414011  418295 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0815 17:34:32.425999  418295 status.go:257] multinode-996865-m02 status: &{Name:multinode-996865-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:34:32.426034  418295 status.go:255] checking status of multinode-996865-m03 ...
	I0815 17:34:32.426351  418295 cli_runner.go:164] Run: docker container inspect multinode-996865-m03 --format={{.State.Status}}
	I0815 17:34:32.442663  418295 status.go:330] multinode-996865-m03 host status = "Stopped" (err=<nil>)
	I0815 17:34:32.442695  418295 status.go:343] host is not running, skipping remaining checks
	I0815 17:34:32.442703  418295 status.go:257] multinode-996865-m03 status: &{Name:multinode-996865-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
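
The stderr trace above also documents how `minikube status` decides the apiserver is Running: it finds the kube-apiserver PID with pgrep, reads that process's freezer cgroup to confirm the state is THAWED (i.e. not paused), and then probes /healthz over HTTPS. A rough manual replay of the same three checks from inside the node; the <pid> and <cgroup> placeholders stand in for the concrete values shown in the log, and the curl call is an assumption (the test uses an in-process HTTP client):

    $ sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    $ sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup
    $ sudo cat /sys/fs/cgroup/freezer/<cgroup>/freezer.state    # expect THAWED
    $ curl -ks https://192.168.67.2:8443/healthz                # expect 200 "ok"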

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-996865 node start m03 -v=7 --alsologtostderr: (9.097214209s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.88s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (90.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-996865
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-996865
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-996865: (25.140159915s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-996865 --wait=true -v=8 --alsologtostderr
E0815 17:35:23.769644  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-996865 --wait=true -v=8 --alsologtostderr: (1m4.874789316s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-996865
--- PASS: TestMultiNode/serial/RestartKeepsNodes (90.14s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-996865 node delete m03: (4.829156076s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.51s)
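
The go-template in the final kubectl call above is worth unpacking: it iterates over every node, then over each node's status.conditions, and prints the status field only for the condition whose type is Ready, one line per node; the test then simply checks for True. The same template, unwrapped from the extra quoting the test harness adds:

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'

With the two surviving nodes above, this should print two " True" lines.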

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-996865 stop: (23.86813823s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-996865 status: exit status 7 (90.689666ms)

                                                
                                                
-- stdout --
	multinode-996865
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-996865-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-996865 status --alsologtostderr: exit status 7 (100.801543ms)

                                                
                                                
-- stdout --
	multinode-996865
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-996865-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:36:41.987407  426780 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:36:41.987560  426780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:41.987587  426780 out.go:358] Setting ErrFile to fd 2...
	I0815 17:36:41.987609  426780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:36:41.987860  426780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:36:41.988078  426780 out.go:352] Setting JSON to false
	I0815 17:36:41.988128  426780 mustload.go:65] Loading cluster: multinode-996865
	I0815 17:36:41.988221  426780 notify.go:220] Checking for updates...
	I0815 17:36:41.988631  426780 config.go:182] Loaded profile config "multinode-996865": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:36:41.988654  426780 status.go:255] checking status of multinode-996865 ...
	I0815 17:36:41.989535  426780 cli_runner.go:164] Run: docker container inspect multinode-996865 --format={{.State.Status}}
	I0815 17:36:42.012003  426780 status.go:330] multinode-996865 host status = "Stopped" (err=<nil>)
	I0815 17:36:42.012034  426780 status.go:343] host is not running, skipping remaining checks
	I0815 17:36:42.012043  426780 status.go:257] multinode-996865 status: &{Name:multinode-996865 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0815 17:36:42.012074  426780 status.go:255] checking status of multinode-996865-m02 ...
	I0815 17:36:42.012404  426780 cli_runner.go:164] Run: docker container inspect multinode-996865-m02 --format={{.State.Status}}
	I0815 17:36:42.043830  426780 status.go:330] multinode-996865-m02 host status = "Stopped" (err=<nil>)
	I0815 17:36:42.043859  426780 status.go:343] host is not running, skipping remaining checks
	I0815 17:36:42.043880  426780 status.go:257] multinode-996865-m02 status: &{Name:multinode-996865-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)
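
Note that both status invocations above "fail" with exit status 7 by design: minikube status encodes cluster state in its exit code, and 7 here means the hosts are stopped, which is exactly what the test arranged. A small sketch of scripting around that convention (the comparison is an editorial example, not part of the test):

    $ out/minikube-linux-arm64 -p multinode-996865 status
    $ [ $? -eq 7 ] && echo "cluster stopped, as expected"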

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-996865 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-996865 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.511150173s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-996865 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.17s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-996865
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-996865-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-996865-m02 --driver=docker  --container-runtime=containerd: exit status 14 (82.256259ms)

                                                
                                                
-- stdout --
	* [multinode-996865-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-996865-m02' is duplicated with machine name 'multinode-996865-m02' in profile 'multinode-996865'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-996865-m03 --driver=docker  --container-runtime=containerd
E0815 17:38:06.570934  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-996865-m03 --driver=docker  --container-runtime=containerd: (32.57792511s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-996865
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-996865: exit status 80 (323.599358ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-996865 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-996865-m03 already exists in multinode-996865-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-996865-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-996865-m03: (1.972590091s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.01s)
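
The two non-zero exits above are the behavior under test: a new profile may not reuse an existing profile name (exit 14, MK_USAGE), and node add refuses a machine name that another profile already owns (exit 80, GUEST_NODE_ADD). To see which names are taken before choosing one, the profile listing used elsewhere in this report works:

    $ out/minikube-linux-arm64 profile list --output=json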

                                                
                                    
TestPreload (132.56s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-585976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0815 17:39:00.703832  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:39:29.635384  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-585976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m36.091774054s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-585976 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-585976 image pull gcr.io/k8s-minikube/busybox: (1.151745863s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-585976
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-585976: (12.091419121s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-585976 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-585976 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.286546355s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-585976 image list
helpers_test.go:175: Cleaning up "test-preload-585976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-585976
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-585976: (2.597090403s)
--- PASS: TestPreload (132.56s)
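
The sequence above exercises the preload path end to end: start on Kubernetes v1.24.4 with preload tarballs disabled, pull an extra image into the cluster, stop, restart without pinning a version, and confirm via image list that the pulled image survived the restart. Condensed (logging and memory flags trimmed from the full invocations in the log):

    $ out/minikube-linux-arm64 start -p test-preload-585976 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
    $ out/minikube-linux-arm64 -p test-preload-585976 image pull gcr.io/k8s-minikube/busybox
    $ out/minikube-linux-arm64 stop -p test-preload-585976
    $ out/minikube-linux-arm64 start -p test-preload-585976 --driver=docker --container-runtime=containerd
    $ out/minikube-linux-arm64 -p test-preload-585976 image list    # busybox should still appear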

                                                
                                    
TestScheduledStopUnix (107.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-481799 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-481799 --memory=2048 --driver=docker  --container-runtime=containerd: (30.719147623s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-481799 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-481799 -n scheduled-stop-481799
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-481799 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-481799 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-481799 -n scheduled-stop-481799
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-481799
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-481799 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-481799
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-481799: exit status 7 (69.945938ms)

                                                
                                                
-- stdout --
	scheduled-stop-481799
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-481799 -n scheduled-stop-481799
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-481799 -n scheduled-stop-481799: exit status 7 (67.1831ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-481799" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-481799
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-481799: (4.771115386s)
--- PASS: TestScheduledStopUnix (107.02s)
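
For reference, the scheduled-stop surface exercised above has three pieces: arming a stop, inspecting the countdown, and cancelling. All three invocations appear verbatim in the log; only the grouping here is editorial:

    $ out/minikube-linux-arm64 stop -p scheduled-stop-481799 --schedule 5m
    $ out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-481799
    $ out/minikube-linux-arm64 stop -p scheduled-stop-481799 --cancel-scheduled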

                                                
                                    
TestInsufficientStorage (10.05s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-656311 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-656311 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.595271829s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"23a4f732-0953-4c70-b0cf-aae6f92089cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-656311] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"87960ef5-76cc-4b29-be64-bd0e950df401","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19450"}}
	{"specversion":"1.0","id":"78a0f046-9f04-46db-9184-53229eeb5278","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3fc52def-4b97-4737-9d37-e88b6ba470ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig"}}
	{"specversion":"1.0","id":"10b877f2-f118-4add-bd75-3ccf0784a34f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube"}}
	{"specversion":"1.0","id":"b6eabdff-5f1c-4e98-b5f4-bc41ccf832d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"efc820ba-7383-45ce-91b7-b6957bb44bd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"99d19a0f-7ad0-4479-8ac5-e93357b09267","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e0e25489-a990-4d8d-a44a-76d24e79d55e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"92882153-3098-498c-9a83-63e66728f96c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"26883e8e-f03f-4ffe-9f5b-2067766f2622","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"87aa92af-2555-41fe-9200-8e988e1cc5d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-656311\" primary control-plane node in \"insufficient-storage-656311\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"9b4b2791-b1b9-4f17-a725-84d12ecab738","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723650208-19443 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"44788417-6d0c-44bc-abc9-bfffea0bf930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"85f57dff-6249-4435-9b93-c00e3a1323de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-656311 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-656311 --output=json --layout=cluster: exit status 7 (302.864487ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-656311","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-656311","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 17:42:21.593389  445534 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-656311" does not appear in /home/jenkins/minikube-integration/19450-292730/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-656311 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-656311 --output=json --layout=cluster: exit status 7 (284.018659ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-656311","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-656311","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0815 17:42:21.879342  445594 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-656311" does not appear in /home/jenkins/minikube-integration/19450-292730/kubeconfig
	E0815 17:42:21.889715  445594 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/insufficient-storage-656311/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-656311" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-656311
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-656311: (1.862502736s)
--- PASS: TestInsufficientStorage (10.05s)
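
With --output=json, minikube emits one CloudEvents-style JSON object per line, and the failure above arrives as an io.k8s.sigs.minikube.error event carrying the exit code, remediation advice, and an issue URL. A sketch of extracting just the error message (jq is an assumption; the test parses the stream in Go instead):

    $ out/minikube-linux-arm64 start -p insufficient-storage-656311 --output=json --driver=docker --container-runtime=containerd \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'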

                                                
                                    
TestRunningBinaryUpgrade (83.54s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.525322108 start -p running-upgrade-858810 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.525322108 start -p running-upgrade-858810 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (38.411567745s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-858810 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-858810 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.117091048s)
helpers_test.go:175: Cleaning up "running-upgrade-858810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-858810
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-858810: (3.385311353s)
--- PASS: TestRunningBinaryUpgrade (83.54s)
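
Note the flag spelling the upgrade test has to juggle: the archived v1.26.0 binary is driven with --vm-driver, while the freshly built binary takes the newer --driver form, both against the same profile, which is the in-place upgrade under test. Trimmed of logging flags:

    $ /tmp/minikube-v1.26.0.525322108 start -p running-upgrade-858810 --memory=2200 --vm-driver=docker --container-runtime=containerd
    $ out/minikube-linux-arm64 start -p running-upgrade-858810 --memory=2200 --driver=docker --container-runtime=containerd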

                                                
                                    
TestKubernetesUpgrade (103.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-728540 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-728540 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.7818862s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-728540
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-728540: (1.296422244s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-728540 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-728540 status --format={{.Host}}: exit status 7 (111.1679ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-728540 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-728540 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.812023673s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-728540 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-728540 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-728540 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (132.38115ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-728540] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-728540
	    minikube start -p kubernetes-upgrade-728540 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7285402 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-728540 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-728540 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-728540 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.094977384s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-728540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-728540
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-728540: (2.262340354s)
--- PASS: TestKubernetesUpgrade (103.61s)
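
The exit-106 block above captures the policy being tested: in-place upgrades (v1.20.0 to v1.31.0 here) are supported, while downgrades are refused with K8S_DOWNGRADE_UNSUPPORTED, and the CLI's own suggestion text enumerates the escape hatches. The recreate path, quoted from that suggestion:

    $ minikube delete -p kubernetes-upgrade-728540
    $ minikube start -p kubernetes-upgrade-728540 --kubernetes-version=v1.20.0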

                                                
                                    
TestMissingContainerUpgrade (164.45s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3681066820 start -p missing-upgrade-273601 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3681066820 start -p missing-upgrade-273601 --memory=2200 --driver=docker  --container-runtime=containerd: (1m30.752861808s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-273601
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-273601
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-273601 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0815 17:44:00.703837  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-273601 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m8.77277064s)
helpers_test.go:175: Cleaning up "missing-upgrade-273601" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-273601
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-273601: (2.713668012s)
--- PASS: TestMissingContainerUpgrade (164.45s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-461515 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-461515 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (77.546867ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-461515] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
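
Exit 14 here is the intended usage error: --no-kubernetes and --kubernetes-version are mutually exclusive, and the hint about `config unset` covers the case where a version is pinned in the global config rather than on the command line. The valid form, as run by the next subtest:

    $ out/minikube-linux-arm64 start -p NoKubernetes-461515 --no-kubernetes --driver=docker --container-runtime=containerd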

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-461515 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-461515 --driver=docker  --container-runtime=containerd: (41.200831792s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-461515 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.66s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (18.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-461515 --no-kubernetes --driver=docker  --container-runtime=containerd
E0815 17:43:06.571017  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-461515 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.960425837s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-461515 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-461515 status -o json: exit status 2 (276.969441ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-461515","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-461515
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-461515: (1.859190683s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.10s)

                                                
                                    
TestNoKubernetes/serial/Start (5.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-461515 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-461515 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.701264974s)
--- PASS: TestNoKubernetes/serial/Start (5.70s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-461515 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-461515 "sudo systemctl is-active --quiet service kubelet": exit status 1 (288.899201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
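
The exit status 1 above is the assertion, not a failure: systemctl is-active exits 3 for an inactive unit, ssh surfaces that as the "Process exited with status 3" in stderr, and minikube ssh folds the remote failure into its own exit 1. Run without --quiet, the same check names the state (an editorial variant, not in the test):

    $ out/minikube-linux-arm64 ssh -p NoKubernetes-461515 "sudo systemctl is-active kubelet"
    inactive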

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-461515
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-461515: (1.24303875s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.04s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-461515 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-461515 --driver=docker  --container-runtime=containerd: (7.038117663s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.04s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-461515 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-461515 "sudo systemctl is-active --quiet service kubelet": exit status 1 (327.37042ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.33s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.87s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.87s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (119.94s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1357664243 start -p stopped-upgrade-336471 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1357664243 start -p stopped-upgrade-336471 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (52.03684241s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1357664243 -p stopped-upgrade-336471 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1357664243 -p stopped-upgrade-336471 stop: (20.097729579s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-336471 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-336471 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (47.803101847s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.94s)

                                                
                                    
TestPause/serial/Start (59.36s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-160037 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-160037 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (59.358660018s)
--- PASS: TestPause/serial/Start (59.36s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-336471
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-336471: (1.235250113s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)
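
When logs need to be attached to an issue rather than read inline, the same command can write to a file, per the flag quoted in the error box earlier in this report:

    $ out/minikube-linux-arm64 logs -p stopped-upgrade-336471 --file=logs.txt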

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.6s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-160037 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-160037 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.581861613s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.60s)

                                                
                                    
TestPause/serial/Pause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-160037 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)
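
pause suspends the Kubernetes processes inside the node container without stopping the container itself. Its counterpart, unpause, is part of the same CLI, though it is not exercised in this part of the run:

    $ out/minikube-linux-arm64 pause -p pause-160037 --alsologtostderr -v=5
    $ out/minikube-linux-arm64 unpause -p pause-160037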

                                                
                                    
TestNetworkPlugins/group/false (5.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-998731 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-998731 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (230.252286ms)

                                                
                                                
-- stdout --
	* [false-998731] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19450
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0815 17:47:55.708815  481004 out.go:345] Setting OutFile to fd 1 ...
	I0815 17:47:55.709075  481004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:47:55.709104  481004 out.go:358] Setting ErrFile to fd 2...
	I0815 17:47:55.709122  481004 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0815 17:47:55.709425  481004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19450-292730/.minikube/bin
	I0815 17:47:55.709874  481004 out.go:352] Setting JSON to false
	I0815 17:47:55.710878  481004 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9019,"bootTime":1723735057,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0815 17:47:55.710982  481004 start.go:139] virtualization:  
	I0815 17:47:55.714364  481004 out.go:177] * [false-998731] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0815 17:47:55.716015  481004 out.go:177]   - MINIKUBE_LOCATION=19450
	I0815 17:47:55.716152  481004 notify.go:220] Checking for updates...
	I0815 17:47:55.719716  481004 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0815 17:47:55.722299  481004 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19450-292730/kubeconfig
	I0815 17:47:55.723975  481004 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19450-292730/.minikube
	I0815 17:47:55.725777  481004 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0815 17:47:55.727565  481004 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0815 17:47:55.730160  481004 config.go:182] Loaded profile config "pause-160037": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0815 17:47:55.730271  481004 driver.go:392] Setting default libvirt URI to qemu:///system
	I0815 17:47:55.768843  481004 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0815 17:47:55.769037  481004 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0815 17:47:55.848599  481004 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:51 SystemTime:2024-08-15 17:47:55.838381296 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0815 17:47:55.848717  481004 docker.go:307] overlay module found
	I0815 17:47:55.851159  481004 out.go:177] * Using the docker driver based on user configuration
	I0815 17:47:55.853632  481004 start.go:297] selected driver: docker
	I0815 17:47:55.853654  481004 start.go:901] validating driver "docker" against <nil>
	I0815 17:47:55.853668  481004 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0815 17:47:55.857182  481004 out.go:201] 
	W0815 17:47:55.861411  481004 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0815 17:47:55.863096  481004 out.go:201] 

** /stderr **
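For context: with --container-runtime=containerd, minikube's start validation requires that some CNI be configured, so this "false" network-plugin profile is expected to exit with MK_USAGE before any cluster or kubeconfig context is created. That is why every probe in the debugLogs below reports a missing context or profile. A minimal sketch of an invocation that would pass this particular check (the CNI choice here is illustrative and not part of the test):

	out/minikube-linux-arm64 start -p false-998731 --driver=docker --container-runtime=containerd --cni=bridge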
net_test.go:88: 
----------------------- debugLogs start: false-998731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-998731

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-998731

>>> host: /etc/nsswitch.conf:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /etc/hosts:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /etc/resolv.conf:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-998731

>>> host: crictl pods:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: crictl containers:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> k8s: describe netcat deployment:
error: context "false-998731" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-998731" does not exist

>>> k8s: netcat logs:
error: context "false-998731" does not exist

>>> k8s: describe coredns deployment:
error: context "false-998731" does not exist

>>> k8s: describe coredns pods:
error: context "false-998731" does not exist

>>> k8s: coredns logs:
error: context "false-998731" does not exist

>>> k8s: describe api server pod(s):
error: context "false-998731" does not exist

>>> k8s: api server logs:
error: context "false-998731" does not exist

>>> host: /etc/cni:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: ip a s:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: ip r s:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: iptables-save:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: iptables table nat:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> k8s: describe kube-proxy daemon set:
error: context "false-998731" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-998731" does not exist

>>> k8s: kube-proxy logs:
error: context "false-998731" does not exist

>>> host: kubelet daemon status:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: kubelet daemon config:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> k8s: kubelet logs:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 17:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-160037
contexts:
- context:
    cluster: pause-160037
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 17:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-160037
  name: pause-160037
current-context: pause-160037
kind: Config
preferences: {}
users:
- name: pause-160037
  user:
    client-certificate: /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/pause-160037/client.crt
    client-key: /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/pause-160037/client.key
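Note that pause-160037 is the only cluster, context, and user in this kubeconfig, which is consistent with the errors above: the false-998731 profile never got far enough to be written into it. For reference, the available contexts can be confirmed with standard kubectl (shown here as a sketch, not part of the test run):

	kubectl config get-contexts
	kubectl config current-context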

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-998731

>>> host: docker daemon status:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: docker daemon config:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /etc/docker/daemon.json:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: docker system info:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: cri-docker daemon status:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: cri-docker daemon config:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: cri-dockerd version:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: containerd daemon status:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: containerd daemon config:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /etc/containerd/config.toml:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: containerd config dump:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: crio daemon status:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: crio daemon config:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: /etc/crio:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

>>> host: crio config:
* Profile "false-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-998731"

----------------------- debugLogs end: false-998731 [took: 4.826728979s] --------------------------------
helpers_test.go:175: Cleaning up "false-998731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-998731
--- PASS: TestNetworkPlugins/group/false (5.67s)

x
+
TestPause/serial/VerifyStatus (0.38s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-160037 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-160037 --output=json --layout=cluster: exit status 2 (377.282197ms)

-- stdout --
	{"Name":"pause-160037","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-160037","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
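The --output=json form above lends itself to scripted checks. As a sketch, assuming jq is available on the host (it is not part of the test harness), the per-component states can be extracted from that payload; note that minikube deliberately exits with status 2 while the cluster is paused, so a script should not treat that as a hard failure:

	out/minikube-linux-arm64 status -p pause-160037 --output=json --layout=cluster | jq -r '.Nodes[].Components | to_entries[] | "\(.key)=\(.value.StatusName)"'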

x
+
TestPause/serial/Unpause (0.88s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-160037 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.88s)

x
+
TestPause/serial/PauseAgain (1.12s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-160037 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-160037 --alsologtostderr -v=5: (1.115505786s)
--- PASS: TestPause/serial/PauseAgain (1.12s)

x
+
TestPause/serial/DeletePaused (2.98s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-160037 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-160037 --alsologtostderr -v=5: (2.983017933s)
--- PASS: TestPause/serial/DeletePaused (2.98s)

x
+
TestPause/serial/VerifyDeletedResources (0.19s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-160037
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-160037: exit status 1 (16.010411ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-160037: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)
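The deletion check above relies on docker volume inspect returning exit status 1 for a missing volume. An equivalent spot check, sketched here with standard docker filters (empty output means the resource is gone; the name comes from this run):

	docker ps -a --filter name=pause-160037 --format '{{.Names}}'
	docker volume ls --filter name=pause-160037 --format '{{.Name}}'
	docker network ls --filter name=pause-160037 --format '{{.Name}}'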

x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (163.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-460705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-460705 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m43.332195223s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (163.33s)

x
+
TestStartStop/group/no-preload/serial/FirstStart (72.54s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-794171 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-794171 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m12.536677649s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.54s)

x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-460705 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d4c999ca-75f6-44e8-a2dc-2d109c00e1a9] Pending
helpers_test.go:344: "busybox" [d4c999ca-75f6-44e8-a2dc-2d109c00e1a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d4c999ca-75f6-44e8-a2dc-2d109c00e1a9] Running
E0815 17:52:03.771380  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004591377s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-460705 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.70s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-460705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-460705 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.502901629s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-460705 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.64s)

x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.47s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-460705 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-460705 --alsologtostderr -v=3: (12.467975633s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.47s)

x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460705 -n old-k8s-version-460705
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-460705 -n old-k8s-version-460705: exit status 7 (86.971195ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-460705 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

x
+
TestStartStop/group/no-preload/serial/DeployApp (7.44s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-794171 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [869aca97-27e0-46f6-b02c-0dee9f527ce6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0815 17:53:06.571567  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [869aca97-27e0-46f6-b02c-0dee9f527ce6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004088476s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-794171 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.44s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-794171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-794171 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.491197582s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-794171 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.66s)

x
+
TestStartStop/group/no-preload/serial/Stop (12.29s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-794171 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-794171 --alsologtostderr -v=3: (12.286835082s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.29s)

x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-794171 -n no-preload-794171
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-794171 -n no-preload-794171: exit status 7 (78.242714ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-794171 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

x
+
TestStartStop/group/no-preload/serial/SecondStart (268.8s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-794171 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0815 17:54:00.703915  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
E0815 17:56:09.637262  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-794171 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m28.408790053s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-794171 -n no-preload-794171
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.80s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t75vc" [01eeb984-3035-4b47-81d3-6dfcf29f6d9b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003164659s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t75vc" [01eeb984-3035-4b47-81d3-6dfcf29f6d9b] Running
E0815 17:58:06.571713  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004775364s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-794171 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-794171 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

x
+
TestStartStop/group/no-preload/serial/Pause (3.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-794171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-794171 -n no-preload-794171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-794171 -n no-preload-794171: exit status 2 (339.913212ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-794171 -n no-preload-794171
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-794171 -n no-preload-794171: exit status 2 (354.43167ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-794171 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-794171 -n no-preload-794171
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-794171 -n no-preload-794171
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.12s)
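For reference, the pause/unpause cycle exercised above can be reproduced by hand with the same binary. As the transcript shows, status reports Paused/Stopped and exits 2 while the cluster is paused, and that exit code is expected rather than an error:

	out/minikube-linux-arm64 pause -p no-preload-794171
	out/minikube-linux-arm64 status -p no-preload-794171 --format={{.APIServer}}
	out/minikube-linux-arm64 unpause -p no-preload-794171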

x
+
TestStartStop/group/embed-certs/serial/FirstStart (53.79s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-918291 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-918291 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (53.78568507s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (53.79s)

x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-55s8p" [c498d6d4-72dc-4d8b-8010-7575cf8a1941] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005084551s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-55s8p" [c498d6d4-72dc-4d8b-8010-7575cf8a1941] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004805754s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-460705 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-460705 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-460705 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460705 -n old-k8s-version-460705
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460705 -n old-k8s-version-460705: exit status 2 (341.936973ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-460705 -n old-k8s-version-460705
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-460705 -n old-k8s-version-460705: exit status 2 (315.375937ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-460705 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-460705 -n old-k8s-version-460705
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-460705 -n old-k8s-version-460705
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.87s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-557940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-557940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (55.445522673s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.45s)

x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-918291 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f5e73b85-6ae8-4631-9d4a-36c3082d121d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f5e73b85-6ae8-4631-9d4a-36c3082d121d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.013217851s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-918291 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.48s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.55s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-918291 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-918291 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.359510483s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-918291 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.55s)

x
+
TestStartStop/group/embed-certs/serial/Stop (12.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-918291 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-918291 --alsologtostderr -v=3: (12.346983118s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.35s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-918291 -n embed-certs-918291
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-918291 -n embed-certs-918291: exit status 7 (154.653603ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-918291 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

x
+
TestStartStop/group/embed-certs/serial/SecondStart (270.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-918291 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-918291 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m29.865259021s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-918291 -n embed-certs-918291
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.22s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.42s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-557940 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4589d221-f735-4f17-93db-8062cd1cb391] Pending
helpers_test.go:344: "busybox" [4589d221-f735-4f17-93db-8062cd1cb391] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4589d221-f735-4f17-93db-8062cd1cb391] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.003716538s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-557940 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.42s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-557940 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-557940 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.081295497s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-557940 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.23s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-557940 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-557940 --alsologtostderr -v=3: (12.078245992s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.08s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940: exit status 7 (69.38261ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-557940 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-557940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0815 18:01:59.741867  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:01:59.748246  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:01:59.759713  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:01:59.781218  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:01:59.822652  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:01:59.904188  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:02:00.065839  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:02:00.388576  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:02:01.030945  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:02:02.312860  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:02:04.875022  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:02:09.996497  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:02:20.237891  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:02:40.719607  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:05.441470  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:05.447994  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:05.459430  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:05.480810  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:05.522187  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:05.603655  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:05.765268  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:06.086784  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:06.571685  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:06.728138  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:08.009994  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:10.571940  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:15.694048  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:21.681546  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:25.935865  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:03:46.418222  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-557940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m37.426270926s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (278.08s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nkfz5" [1de7540a-94a3-4450-abee-6a922550378c] Running
E0815 18:04:00.704424  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/addons-773218/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004750384s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-nkfz5" [1de7540a-94a3-4450-abee-6a922550378c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00340157s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-918291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-918291 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-918291 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-918291 -n embed-certs-918291
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-918291 -n embed-certs-918291: exit status 2 (309.227328ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-918291 -n embed-certs-918291
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-918291 -n embed-certs-918291: exit status 2 (325.280287ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-918291 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-918291 -n embed-certs-918291
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-918291 -n embed-certs-918291
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.05s)

TestStartStop/group/newest-cni/serial/FirstStart (36.43s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-928538 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0815 18:04:27.380057  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:04:43.603238  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-928538 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (36.434523424s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.43s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.63s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-928538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-928538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.628731532s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.63s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-928538 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-928538 --alsologtostderr -v=3: (1.301884498s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-928538 -n newest-cni-928538
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-928538 -n newest-cni-928538: exit status 7 (70.661209ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-928538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (14.44s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-928538 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-928538 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (13.709483795s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-928538 -n newest-cni-928538
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.44s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ctpxz" [24c9de39-8aec-4af7-8113-830c71fdb5e9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004851346s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-ctpxz" [24c9de39-8aec-4af7-8113-830c71fdb5e9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004356219s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-557940 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-557940 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-928538 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-557940 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-557940 --alsologtostderr -v=1: (1.270466963s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940: exit status 2 (442.459598ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940: exit status 2 (417.352196ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-557940 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-557940 -n default-k8s-diff-port-557940
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.53s)

TestStartStop/group/newest-cni/serial/Pause (4.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-928538 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-928538 --alsologtostderr -v=1: (1.214965045s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-928538 -n newest-cni-928538
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-928538 -n newest-cni-928538: exit status 2 (410.621339ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-928538 -n newest-cni-928538
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-928538 -n newest-cni-928538: exit status 2 (375.671984ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-928538 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-928538 --alsologtostderr -v=1: (1.079141972s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-928538 -n newest-cni-928538
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-928538 -n newest-cni-928538
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.68s)
E0815 18:11:19.765340  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:32.802360  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:32.808819  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:32.820222  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:32.841648  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:32.883046  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:32.964493  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:33.126081  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:33.447693  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:33.874684  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:33.881069  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:33.892500  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:33.914706  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:33.956158  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:34.037785  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:34.089289  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:34.199761  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:34.521228  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:35.165574  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:35.371351  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:36.446906  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:37.932928  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:39.009356  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:43.054798  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:44.131422  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:53.296724  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/kindnet-998731/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:11:54.373451  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/auto-998731/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/auto/Start (72.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m12.864153736s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.87s)

TestNetworkPlugins/group/kindnet/Start (72.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0815 18:05:49.302326  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m12.1079231s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (72.11s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-klfw4" [00f29144-52c7-41a9-804c-32aa87ccfc2a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004024581s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-998731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-998731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c2gjp" [db322180-b650-4917-84d8-babf58df4da8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c2gjp" [db322180-b650-4917-84d8-babf58df4da8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00456513s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-998731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-998731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zlpkl" [2dd65add-bf33-465c-b52f-c7cef01cbdc0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zlpkl" [2dd65add-bf33-465c-b52f-c7cef01cbdc0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.004246106s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.29s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-998731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-998731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (72.51s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m12.50935426s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.51s)

TestNetworkPlugins/group/custom-flannel/Start (58.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0815 18:07:27.445572  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:08:05.441665  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:08:06.571080  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/functional-423031/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.353987108s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.35s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-998731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-998731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v6j4l" [e106b352-7f44-49d2-ab97-3f1eb57fad5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v6j4l" [e106b352-7f44-49d2-ab97-3f1eb57fad5e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003664134s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.27s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7c8t5" [2d59ca54-f056-4468-92db-93ed81775ef6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003789448s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-998731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-998731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-998731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n27rs" [b6068024-677e-4430-94b6-2c6a23458e2e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n27rs" [b6068024-677e-4430-94b6-2c6a23458e2e] Running
E0815 18:08:33.144488  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/no-preload-794171/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004296246s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

TestNetworkPlugins/group/calico/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-998731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.28s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.28s)

TestNetworkPlugins/group/enable-default-cni/Start (82.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m22.134400656s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.13s)

TestNetworkPlugins/group/flannel/Start (58.13s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0815 18:09:57.821341  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:09:57.827782  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:09:57.839288  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:09:57.860765  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:09:57.902254  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:09:57.983706  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:09:58.145786  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:09:58.467266  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:09:59.109327  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
E0815 18:10:00.396217  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (58.1278085s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.13s)
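
Note: the E0815 cert_rotation lines interleaved above appear to come from this test process (pid 298130) still holding a client that watches the client certificate of the already-deleted default-k8s-diff-port-557940 profile; they are unrelated noise, and the flannel start itself passed. Assuming the workspace layout shown in the log, the missing file can be confirmed with:

    ls -l /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt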

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hgvkf" [1ec4ada8-1f19-4d12-9ef8-93b2f65bfbd7] Running
E0815 18:10:02.958402  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004533059s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
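
The ControllerPod check only waits for the flannel DaemonSet pod (label app=flannel, namespace kube-flannel) to report Running. A sketch of the equivalent manual inspection, assuming the flannel-998731 profile is still up:

    kubectl --context flannel-998731 -n kube-flannel get pods -l app=flannel -o wide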

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-998731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)
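
KubeletFlags is a one-liner: it sshes into the node and lists the running kubelet process with its command-line flags. The same output can be fetched directly while the profile exists:

    out/minikube-linux-arm64 ssh -p flannel-998731 "pgrep -a kubelet"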

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-998731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9cktn" [aefa0d7f-48c6-4568-9718-057afa0f840f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 18:10:08.080375  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/default-k8s-diff-port-557940/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-9cktn" [aefa0d7f-48c6-4568-9718-057afa0f840f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00417784s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)
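
NetCatPod force-replaces the netcat test deployment and waits for its pod to become Ready; the Pending phase above (ContainersNotReady on dnsutils) is the usual image-pull window. A minimal manual equivalent, assuming the same testdata manifest and a live flannel-998731 context:

    kubectl --context flannel-998731 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context flannel-998731 wait --for=condition=Ready pod -l app=netcat --timeout=15m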

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-998731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-998731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kc4dr" [81b151ab-b195-4829-a033-93b7145aa300] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kc4dr" [81b151ab-b195-4829-a033-93b7145aa300] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003353601s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-998731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-998731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
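
Each plugin group runs the same three probes from inside the netcat pod: DNS (nslookup of kubernetes.default), Localhost (nc to localhost:8080), and HairPin (nc back to the netcat service). They can be replayed verbatim against any passing profile, e.g. enable-default-cni-998731 while it is still running:

    kubectl --context enable-default-cni-998731 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context enable-default-cni-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context enable-default-cni-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"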

                                                
                                    
TestNetworkPlugins/group/bridge/Start (75.8s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-998731 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m15.801038085s)
--- PASS: TestNetworkPlugins/group/bridge/Start (75.80s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-998731 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-998731 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bxc9r" [a731f453-b781-4dff-beea-948650a582db] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0815 18:11:59.742221  298130 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/old-k8s-version-460705/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-bxc9r" [a731f453-b781-4dff-beea-948650a582db] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004403844s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-998731 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-998731 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-863236 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-863236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-863236
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-405484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-405484
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.16s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-998731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-998731" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19450-292730/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 17:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-160037
contexts:
- context:
    cluster: pause-160037
    extensions:
    - extension:
        last-update: Thu, 15 Aug 2024 17:47:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: pause-160037
  name: pause-160037
current-context: pause-160037
kind: Config
preferences: {}
users:
- name: pause-160037
  user:
    client-certificate: /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/pause-160037/client.crt
    client-key: /home/jenkins/minikube-integration/19450-292730/.minikube/profiles/pause-160037/client.key
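
The dumped kubeconfig above still points at the leftover pause-160037 profile rather than kubenet-998731, which matches the "context was not found" errors throughout this debug log: the kubenet cluster was never created, so the skip is expected to produce only negative output. Available contexts can be listed with a standard kubectl call (not part of the test itself):

    kubectl config get-contexts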

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-998731

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-998731"

                                                
                                                
----------------------- debugLogs end: kubenet-998731 [took: 3.969846131s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-998731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-998731
--- SKIP: TestNetworkPlugins/group/kubenet (4.16s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.15s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-998731 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-998731

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-998731" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-998731" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-998731

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-998731

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-998731" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-998731" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-998731

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-998731

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-998731" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-998731" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-998731" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-998731" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-998731" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: kubelet daemon config:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> k8s: kubelet logs:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-998731

>>> host: docker daemon status:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: docker daemon config:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: docker system info:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: cri-docker daemon status:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: cri-docker daemon config:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: cri-dockerd version:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: containerd daemon status:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: containerd daemon config:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: containerd config dump:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: crio daemon status:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: crio daemon config:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: /etc/crio:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

>>> host: crio config:
* Profile "cilium-998731" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-998731"

----------------------- debugLogs end: cilium-998731 [took: 4.937678784s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-998731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-998731
--- SKIP: TestNetworkPlugins/group/cilium (5.15s)
