Test Report: Docker_Linux_containerd_arm64 19423

7f7446252791c927139509879c70af875912dc64:2024-08-18:35842

Failed tests (2/328)

Order | Failed test                                             | Duration (s)
------|---------------------------------------------------------|-------------
   29 | TestAddons/serial/Volcano                               | 199.96
  302 | TestStartStop/group/old-k8s-version/serial/SecondStart  | 376.68
TestAddons/serial/Volcano (199.96s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 52.208828ms
addons_test.go:897: volcano-scheduler stabilized in 52.634788ms
addons_test.go:913: volcano-controller stabilized in 53.128834ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-gpg4k" [8c7abe3a-473d-4a72-9925-2fecb4a0a45c] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.011533169s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-xt6vs" [d3558cc6-861b-4ef5-9210-716401a4fc56] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003921719s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-dnrr5" [5832215d-3554-4802-b536-84b020d61a2f] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003679411s
addons_test.go:932: (dbg) Run:  kubectl --context addons-677874 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-677874 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-677874 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e0d4a888-d4be-4204-8181-d5039d53e677] Pending
helpers_test.go:344: "test-job-nginx-0" [e0d4a888-d4be-4204-8181-d5039d53e677] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-677874 -n addons-677874
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-18 18:45:12.055246768 +0000 UTC m=+427.217572643
addons_test.go:964: (dbg) Run:  kubectl --context addons-677874 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-677874 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-0541d50d-d699-4d0b-abe2-af1bb4e120da
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zklv9 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-zklv9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-677874 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-677874 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
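Note on the failure: the FailedScheduling event above ("0/1 nodes are unavailable: 1 Insufficient cpu.") means the single minikube node could not fit the test pod's request of a full CPU (Requests/Limits cpu: 1). The node was created with 2 CPUs ("NanoCpus": 2000000000 in the docker inspect output below), and the addon pods presumably already request enough of that to leave less than one whole CPU allocatable. A minimal sketch for confirming this against the same profile, assuming the node name matches the profile name (addons-677874):

    # Compare the node's allocatable CPU with the CPU already requested by scheduled pods
    kubectl --context addons-677874 describe node addons-677874 | grep -A 10 'Allocated resources'
    # Show the CPU request/limit that the Volcano job places on its pod (cpu: 1 per the describe output above)
    kubectl --context addons-677874 get pod test-job-nginx-0 -n my-volcano -o jsonpath='{.spec.containers[0].resources}'
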
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-677874
helpers_test.go:235: (dbg) docker inspect addons-677874:

-- stdout --
	[
	    {
	        "Id": "6812ccada09ddb5886fc85be84700ee979e597714afe4ca8c7b116ed2a60d134",
	        "Created": "2024-08-18T18:38:43.477657002Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 160805,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-18T18:38:43.6040217Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/6812ccada09ddb5886fc85be84700ee979e597714afe4ca8c7b116ed2a60d134/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6812ccada09ddb5886fc85be84700ee979e597714afe4ca8c7b116ed2a60d134/hostname",
	        "HostsPath": "/var/lib/docker/containers/6812ccada09ddb5886fc85be84700ee979e597714afe4ca8c7b116ed2a60d134/hosts",
	        "LogPath": "/var/lib/docker/containers/6812ccada09ddb5886fc85be84700ee979e597714afe4ca8c7b116ed2a60d134/6812ccada09ddb5886fc85be84700ee979e597714afe4ca8c7b116ed2a60d134-json.log",
	        "Name": "/addons-677874",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-677874:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-677874",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/65b59877f902a4541b5caea25e8bfa4fe8bb8c6d9e867a863e1533751e95bc42-init/diff:/var/lib/docker/overlay2/335569924eb2f5a2927a3aad525f5945de522a21e4174960fd450e8e86ba9355/diff",
	                "MergedDir": "/var/lib/docker/overlay2/65b59877f902a4541b5caea25e8bfa4fe8bb8c6d9e867a863e1533751e95bc42/merged",
	                "UpperDir": "/var/lib/docker/overlay2/65b59877f902a4541b5caea25e8bfa4fe8bb8c6d9e867a863e1533751e95bc42/diff",
	                "WorkDir": "/var/lib/docker/overlay2/65b59877f902a4541b5caea25e8bfa4fe8bb8c6d9e867a863e1533751e95bc42/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-677874",
	                "Source": "/var/lib/docker/volumes/addons-677874/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-677874",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-677874",
	                "name.minikube.sigs.k8s.io": "addons-677874",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fb9dc845bd131a8a9bdc137ce798f9400f5910a6e0d23cc12f10c55c2e90abd1",
	            "SandboxKey": "/var/run/docker/netns/fb9dc845bd13",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38317"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38318"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38321"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38319"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38320"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-677874": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "990036cf47044d31b9f9fabf3f5ed5d42add59e8b1df24d07625bd52e7705c68",
	                    "EndpointID": "9d2932beef0014345695f2f62f148d46c777741d89079022119f861dd0e7517e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-677874",
	                        "6812ccada09d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-677874 -n addons-677874
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-677874 logs -n 25: (1.56246912s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-591709   | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | -p download-only-591709              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| delete  | -p download-only-591709              | download-only-591709   | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| start   | -o=json --download-only              | download-only-171941   | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | -p download-only-171941              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| delete  | -p download-only-171941              | download-only-171941   | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| delete  | -p download-only-591709              | download-only-591709   | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| delete  | -p download-only-171941              | download-only-171941   | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| start   | --download-only -p                   | download-docker-520748 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | download-docker-520748               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-520748            | download-docker-520748 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| start   | --download-only -p                   | binary-mirror-829476   | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | binary-mirror-829476                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:41021               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-829476              | binary-mirror-829476   | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| addons  | enable dashboard -p                  | addons-677874          | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | addons-677874                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-677874          | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | addons-677874                        |                        |         |         |                     |                     |
	| start   | -p addons-677874 --wait=true         | addons-677874          | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:38:19
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:38:19.409834  160317 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:38:19.410010  160317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:19.410023  160317 out.go:358] Setting ErrFile to fd 2...
	I0818 18:38:19.410028  160317 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:19.410291  160317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 18:38:19.410748  160317 out.go:352] Setting JSON to false
	I0818 18:38:19.411619  160317 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":98444,"bootTime":1723907856,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0818 18:38:19.411685  160317 start.go:139] virtualization:  
	I0818 18:38:19.413652  160317 out.go:177] * [addons-677874] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0818 18:38:19.415549  160317 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:38:19.415652  160317 notify.go:220] Checking for updates...
	I0818 18:38:19.418503  160317 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:38:19.420258  160317 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 18:38:19.422088  160317 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	I0818 18:38:19.423709  160317 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0818 18:38:19.425148  160317 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:38:19.427032  160317 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:38:19.447211  160317 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0818 18:38:19.447329  160317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:38:19.519754  160317 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-18 18:38:19.510248865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:38:19.519938  160317 docker.go:307] overlay module found
	I0818 18:38:19.521736  160317 out.go:177] * Using the docker driver based on user configuration
	I0818 18:38:19.523425  160317 start.go:297] selected driver: docker
	I0818 18:38:19.523458  160317 start.go:901] validating driver "docker" against <nil>
	I0818 18:38:19.523477  160317 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:38:19.524159  160317 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:38:19.582371  160317 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-18 18:38:19.57273462 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:38:19.582605  160317 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:38:19.582888  160317 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:38:19.584943  160317 out.go:177] * Using Docker driver with root privileges
	I0818 18:38:19.586606  160317 cni.go:84] Creating CNI manager for ""
	I0818 18:38:19.586629  160317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0818 18:38:19.586639  160317 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0818 18:38:19.586734  160317 start.go:340] cluster config:
	{Name:addons-677874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-677874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:38:19.589036  160317 out.go:177] * Starting "addons-677874" primary control-plane node in "addons-677874" cluster
	I0818 18:38:19.590840  160317 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0818 18:38:19.592374  160317 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0818 18:38:19.594089  160317 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0818 18:38:19.594133  160317 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0818 18:38:19.594191  160317 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0818 18:38:19.594201  160317 cache.go:56] Caching tarball of preloaded images
	I0818 18:38:19.594275  160317 preload.go:172] Found /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 18:38:19.594282  160317 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0818 18:38:19.594614  160317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/config.json ...
	I0818 18:38:19.594634  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/config.json: {Name:mk1df3a45d627564a507549cefa199da7a6ac65c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:19.609389  160317 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0818 18:38:19.609508  160317 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0818 18:38:19.609533  160317 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0818 18:38:19.609540  160317 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0818 18:38:19.609553  160317 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0818 18:38:19.609559  160317 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0818 18:38:36.390563  160317 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0818 18:38:36.390607  160317 cache.go:194] Successfully downloaded all kic artifacts
	I0818 18:38:36.390653  160317 start.go:360] acquireMachinesLock for addons-677874: {Name:mka66fcfb0222f32f7cf0249f61c3bd108b86a33 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:38:36.391407  160317 start.go:364] duration metric: took 723.715µs to acquireMachinesLock for "addons-677874"
	I0818 18:38:36.391454  160317 start.go:93] Provisioning new machine with config: &{Name:addons-677874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-677874 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0818 18:38:36.391543  160317 start.go:125] createHost starting for "" (driver="docker")
	I0818 18:38:36.393447  160317 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0818 18:38:36.393698  160317 start.go:159] libmachine.API.Create for "addons-677874" (driver="docker")
	I0818 18:38:36.393735  160317 client.go:168] LocalClient.Create starting
	I0818 18:38:36.393846  160317 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem
	I0818 18:38:36.816232  160317 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem
	I0818 18:38:37.497833  160317 cli_runner.go:164] Run: docker network inspect addons-677874 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0818 18:38:37.515573  160317 cli_runner.go:211] docker network inspect addons-677874 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0818 18:38:37.515662  160317 network_create.go:284] running [docker network inspect addons-677874] to gather additional debugging logs...
	I0818 18:38:37.515686  160317 cli_runner.go:164] Run: docker network inspect addons-677874
	W0818 18:38:37.530381  160317 cli_runner.go:211] docker network inspect addons-677874 returned with exit code 1
	I0818 18:38:37.530427  160317 network_create.go:287] error running [docker network inspect addons-677874]: docker network inspect addons-677874: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-677874 not found
	I0818 18:38:37.530443  160317 network_create.go:289] output of [docker network inspect addons-677874]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-677874 not found
	
	** /stderr **
	I0818 18:38:37.530560  160317 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0818 18:38:37.546643  160317 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400176c870}
	I0818 18:38:37.546691  160317 network_create.go:124] attempt to create docker network addons-677874 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0818 18:38:37.546755  160317 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-677874 addons-677874
	I0818 18:38:37.614305  160317 network_create.go:108] docker network addons-677874 192.168.49.0/24 created
	I0818 18:38:37.614344  160317 kic.go:121] calculated static IP "192.168.49.2" for the "addons-677874" container
	I0818 18:38:37.614421  160317 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0818 18:38:37.630383  160317 cli_runner.go:164] Run: docker volume create addons-677874 --label name.minikube.sigs.k8s.io=addons-677874 --label created_by.minikube.sigs.k8s.io=true
	I0818 18:38:37.647513  160317 oci.go:103] Successfully created a docker volume addons-677874
	I0818 18:38:37.647604  160317 cli_runner.go:164] Run: docker run --rm --name addons-677874-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-677874 --entrypoint /usr/bin/test -v addons-677874:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0818 18:38:39.333088  160317 cli_runner.go:217] Completed: docker run --rm --name addons-677874-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-677874 --entrypoint /usr/bin/test -v addons-677874:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib: (1.685425183s)
	I0818 18:38:39.333124  160317 oci.go:107] Successfully prepared a docker volume addons-677874
	I0818 18:38:39.333146  160317 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0818 18:38:39.333165  160317 kic.go:194] Starting extracting preloaded images to volume ...
	I0818 18:38:39.333255  160317 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-677874:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0818 18:38:43.411684  160317 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-677874:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (4.078385983s)
	I0818 18:38:43.411717  160317 kic.go:203] duration metric: took 4.078547895s to extract preloaded images to volume ...
	W0818 18:38:43.411873  160317 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0818 18:38:43.411984  160317 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0818 18:38:43.463757  160317 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-677874 --name addons-677874 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-677874 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-677874 --network addons-677874 --ip 192.168.49.2 --volume addons-677874:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0818 18:38:43.759931  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Running}}
	I0818 18:38:43.778508  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:38:43.806846  160317 cli_runner.go:164] Run: docker exec addons-677874 stat /var/lib/dpkg/alternatives/iptables
	I0818 18:38:43.887731  160317 oci.go:144] the created container "addons-677874" has a running status.
	I0818 18:38:43.887759  160317 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa...
	I0818 18:38:44.084580  160317 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0818 18:38:44.109478  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:38:44.136212  160317 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0818 18:38:44.136239  160317 kic_runner.go:114] Args: [docker exec --privileged addons-677874 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0818 18:38:44.211656  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:38:44.247067  160317 machine.go:93] provisionDockerMachine start ...
	I0818 18:38:44.247166  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:38:44.269273  160317 main.go:141] libmachine: Using SSH client type: native
	I0818 18:38:44.269562  160317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38317 <nil> <nil>}
	I0818 18:38:44.269586  160317 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 18:38:44.270280  160317 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0818 18:38:47.399026  160317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-677874
	
	I0818 18:38:47.399052  160317 ubuntu.go:169] provisioning hostname "addons-677874"
	I0818 18:38:47.399120  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:38:47.416512  160317 main.go:141] libmachine: Using SSH client type: native
	I0818 18:38:47.416940  160317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38317 <nil> <nil>}
	I0818 18:38:47.416980  160317 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-677874 && echo "addons-677874" | sudo tee /etc/hostname
	I0818 18:38:47.559304  160317 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-677874
	
	I0818 18:38:47.559473  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:38:47.576056  160317 main.go:141] libmachine: Using SSH client type: native
	I0818 18:38:47.576290  160317 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38317 <nil> <nil>}
	I0818 18:38:47.576306  160317 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-677874' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-677874/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-677874' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:38:47.703682  160317 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:38:47.703712  160317 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-154159/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-154159/.minikube}
	I0818 18:38:47.703745  160317 ubuntu.go:177] setting up certificates
	I0818 18:38:47.703755  160317 provision.go:84] configureAuth start
	I0818 18:38:47.703851  160317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-677874
	I0818 18:38:47.720704  160317 provision.go:143] copyHostCerts
	I0818 18:38:47.720792  160317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/ca.pem (1082 bytes)
	I0818 18:38:47.720916  160317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/cert.pem (1123 bytes)
	I0818 18:38:47.720979  160317 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/key.pem (1675 bytes)
	I0818 18:38:47.721039  160317 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca-key.pem org=jenkins.addons-677874 san=[127.0.0.1 192.168.49.2 addons-677874 localhost minikube]
	I0818 18:38:48.637767  160317 provision.go:177] copyRemoteCerts
	I0818 18:38:48.637836  160317 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:38:48.637880  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:38:48.655065  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:38:48.748596  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 18:38:48.773637  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0818 18:38:48.797739  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 18:38:48.821055  160317 provision.go:87] duration metric: took 1.117285178s to configureAuth
	I0818 18:38:48.821089  160317 ubuntu.go:193] setting minikube options for container-runtime
	I0818 18:38:48.821279  160317 config.go:182] Loaded profile config "addons-677874": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 18:38:48.821291  160317 machine.go:96] duration metric: took 4.574199661s to provisionDockerMachine
	I0818 18:38:48.821299  160317 client.go:171] duration metric: took 12.427552595s to LocalClient.Create
	I0818 18:38:48.821311  160317 start.go:167] duration metric: took 12.427617243s to libmachine.API.Create "addons-677874"
	I0818 18:38:48.821323  160317 start.go:293] postStartSetup for "addons-677874" (driver="docker")
	I0818 18:38:48.821333  160317 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:38:48.821388  160317 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:38:48.821441  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:38:48.837339  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:38:48.933233  160317 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:38:48.936570  160317 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0818 18:38:48.936625  160317 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0818 18:38:48.936638  160317 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0818 18:38:48.936648  160317 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0818 18:38:48.936667  160317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-154159/.minikube/addons for local assets ...
	I0818 18:38:48.936757  160317 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-154159/.minikube/files for local assets ...
	I0818 18:38:48.936782  160317 start.go:296] duration metric: took 115.452884ms for postStartSetup
	I0818 18:38:48.937171  160317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-677874
	I0818 18:38:48.953828  160317 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/config.json ...
	I0818 18:38:48.954126  160317 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 18:38:48.954179  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:38:48.972381  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:38:49.060770  160317 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0818 18:38:49.065374  160317 start.go:128] duration metric: took 12.673813839s to createHost
	I0818 18:38:49.065448  160317 start.go:83] releasing machines lock for "addons-677874", held for 12.674017087s
	I0818 18:38:49.065551  160317 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-677874
	I0818 18:38:49.081948  160317 ssh_runner.go:195] Run: cat /version.json
	I0818 18:38:49.081991  160317 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:38:49.082000  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:38:49.082076  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:38:49.103469  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:38:49.121628  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:38:49.329164  160317 ssh_runner.go:195] Run: systemctl --version
	I0818 18:38:49.333795  160317 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 18:38:49.338091  160317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0818 18:38:49.364232  160317 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0818 18:38:49.364349  160317 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:38:49.393173  160317 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0818 18:38:49.393246  160317 start.go:495] detecting cgroup driver to use...
	I0818 18:38:49.393295  160317 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0818 18:38:49.393377  160317 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 18:38:49.405833  160317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 18:38:49.417678  160317 docker.go:217] disabling cri-docker service (if available) ...
	I0818 18:38:49.417767  160317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 18:38:49.432287  160317 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 18:38:49.447478  160317 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 18:38:49.536926  160317 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 18:38:49.627621  160317 docker.go:233] disabling docker service ...
	I0818 18:38:49.627694  160317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 18:38:49.647750  160317 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 18:38:49.658744  160317 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 18:38:49.747262  160317 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 18:38:49.835368  160317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 18:38:49.846779  160317 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:38:49.863908  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 18:38:49.873954  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 18:38:49.884149  160317 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 18:38:49.884254  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 18:38:49.894485  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 18:38:49.904576  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 18:38:49.914380  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 18:38:49.924207  160317 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:38:49.933774  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 18:38:49.943560  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 18:38:49.953673  160317 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 18:38:49.964529  160317 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:38:49.974365  160317 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 18:38:49.982576  160317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:38:50.078564  160317 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 18:38:50.213173  160317 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0818 18:38:50.213318  160317 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0818 18:38:50.217329  160317 start.go:563] Will wait 60s for crictl version
	I0818 18:38:50.217462  160317 ssh_runner.go:195] Run: which crictl
	I0818 18:38:50.221004  160317 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:38:50.263450  160317 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0818 18:38:50.263578  160317 ssh_runner.go:195] Run: containerd --version
	I0818 18:38:50.285770  160317 ssh_runner.go:195] Run: containerd --version
	I0818 18:38:50.312476  160317 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0818 18:38:50.314445  160317 cli_runner.go:164] Run: docker network inspect addons-677874 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0818 18:38:50.330358  160317 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0818 18:38:50.334065  160317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:38:50.345342  160317 kubeadm.go:883] updating cluster {Name:addons-677874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-677874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 18:38:50.345473  160317 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0818 18:38:50.345539  160317 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:38:50.384061  160317 containerd.go:627] all images are preloaded for containerd runtime.
	I0818 18:38:50.384087  160317 containerd.go:534] Images already preloaded, skipping extraction
	I0818 18:38:50.384154  160317 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 18:38:50.419431  160317 containerd.go:627] all images are preloaded for containerd runtime.
	I0818 18:38:50.419468  160317 cache_images.go:84] Images are preloaded, skipping loading
	I0818 18:38:50.419486  160317 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0818 18:38:50.419590  160317 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-677874 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-677874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 18:38:50.419659  160317 ssh_runner.go:195] Run: sudo crictl info
	I0818 18:38:50.456332  160317 cni.go:84] Creating CNI manager for ""
	I0818 18:38:50.456359  160317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0818 18:38:50.456371  160317 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 18:38:50.456394  160317 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-677874 NodeName:addons-677874 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 18:38:50.456525  160317 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-677874"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 18:38:50.456604  160317 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:38:50.465415  160317 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 18:38:50.465492  160317 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 18:38:50.474241  160317 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 18:38:50.492160  160317 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:38:50.510148  160317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0818 18:38:50.528354  160317 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0818 18:38:50.531875  160317 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 18:38:50.542098  160317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:38:50.619662  160317 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:38:50.640335  160317 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874 for IP: 192.168.49.2
	I0818 18:38:50.640408  160317 certs.go:194] generating shared ca certs ...
	I0818 18:38:50.640438  160317 certs.go:226] acquiring lock for ca certs: {Name:mk31d70c02908ccfff2b137754f7d1e0b3715b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:50.641235  160317 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-154159/.minikube/ca.key
	I0818 18:38:50.852218  160317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-154159/.minikube/ca.crt ...
	I0818 18:38:50.852258  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/ca.crt: {Name:mkb0e3e748fd7169298ddf2cb90f3393b3233a9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:50.852459  160317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-154159/.minikube/ca.key ...
	I0818 18:38:50.852467  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/ca.key: {Name:mkaf21815d9739aedcb53f8f3409c6ec2e961b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:50.853265  160317 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.key
	I0818 18:38:51.303032  160317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.crt ...
	I0818 18:38:51.303066  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.crt: {Name:mk07f61326f2846f311c7f6bfa5849fadfa7aec3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:51.303260  160317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.key ...
	I0818 18:38:51.303274  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.key: {Name:mk31a417d62385079dd6fd6b41f9213b51ca52f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:51.303360  160317 certs.go:256] generating profile certs ...
	I0818 18:38:51.303424  160317 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.key
	I0818 18:38:51.303448  160317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt with IP's: []
	I0818 18:38:52.531956  160317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt ...
	I0818 18:38:52.532001  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: {Name:mk18231c5403a2c7bad817cc4b02b7fb3e762ba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:52.532224  160317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.key ...
	I0818 18:38:52.532240  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.key: {Name:mkf7e08c106aa253730b306ccdc1fb54392a7094 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:52.532329  160317 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.key.f1a67cfa
	I0818 18:38:52.532351  160317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.crt.f1a67cfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0818 18:38:53.112216  160317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.crt.f1a67cfa ...
	I0818 18:38:53.112248  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.crt.f1a67cfa: {Name:mk8e432ba3a5ebbdf1533cb519afaf62ce11befb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:53.113135  160317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.key.f1a67cfa ...
	I0818 18:38:53.113152  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.key.f1a67cfa: {Name:mkcfceb6f8f01e69601f1ba96db374656abc6725 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:53.113248  160317 certs.go:381] copying /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.crt.f1a67cfa -> /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.crt
	I0818 18:38:53.113338  160317 certs.go:385] copying /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.key.f1a67cfa -> /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.key
	I0818 18:38:53.113395  160317 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/proxy-client.key
	I0818 18:38:53.113415  160317 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/proxy-client.crt with IP's: []
	I0818 18:38:53.833249  160317 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/proxy-client.crt ...
	I0818 18:38:53.833284  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/proxy-client.crt: {Name:mk69a194687a80161761122eab687e80cd9b3d53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:53.834058  160317 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/proxy-client.key ...
	I0818 18:38:53.834076  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/proxy-client.key: {Name:mk3c4fb45698608ae15cecb023234cef70eda35d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:53.834756  160317 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 18:38:53.834803  160317 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem (1082 bytes)
	I0818 18:38:53.834834  160317 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:38:53.834866  160317 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/key.pem (1675 bytes)
	I0818 18:38:53.835515  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:38:53.864762  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 18:38:53.894740  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:38:53.919167  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 18:38:53.943461  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0818 18:38:53.967483  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 18:38:53.990931  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:38:54.021073  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 18:38:54.047498  160317 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:38:54.073210  160317 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 18:38:54.092688  160317 ssh_runner.go:195] Run: openssl version
	I0818 18:38:54.098561  160317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:38:54.108523  160317 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:38:54.112431  160317 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:38:54.112533  160317 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:38:54.119961  160317 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 18:38:54.130102  160317 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:38:54.133490  160317 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0818 18:38:54.133540  160317 kubeadm.go:392] StartCluster: {Name:addons-677874 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-677874 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:38:54.133633  160317 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0818 18:38:54.133693  160317 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 18:38:54.174012  160317 cri.go:89] found id: ""
	I0818 18:38:54.174085  160317 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 18:38:54.183196  160317 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 18:38:54.191952  160317 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0818 18:38:54.192051  160317 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 18:38:54.201151  160317 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0818 18:38:54.201174  160317 kubeadm.go:157] found existing configuration files:
	
	I0818 18:38:54.201255  160317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0818 18:38:54.210514  160317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0818 18:38:54.210585  160317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0818 18:38:54.219262  160317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0818 18:38:54.228403  160317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0818 18:38:54.228499  160317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0818 18:38:54.236958  160317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0818 18:38:54.245653  160317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0818 18:38:54.245723  160317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 18:38:54.254040  160317 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0818 18:38:54.262568  160317 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0818 18:38:54.262657  160317 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 18:38:54.271269  160317 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0818 18:38:54.315395  160317 kubeadm.go:310] W0818 18:38:54.314705    1024 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:38:54.316600  160317 kubeadm.go:310] W0818 18:38:54.316114    1024 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0818 18:38:54.342732  160317 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0818 18:38:54.402953  160317 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0818 18:39:08.068869  160317 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0818 18:39:08.068929  160317 kubeadm.go:310] [preflight] Running pre-flight checks
	I0818 18:39:08.069015  160317 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0818 18:39:08.069072  160317 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0818 18:39:08.069113  160317 kubeadm.go:310] OS: Linux
	I0818 18:39:08.069162  160317 kubeadm.go:310] CGROUPS_CPU: enabled
	I0818 18:39:08.069212  160317 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0818 18:39:08.069262  160317 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0818 18:39:08.069312  160317 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0818 18:39:08.069371  160317 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0818 18:39:08.069422  160317 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0818 18:39:08.069472  160317 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0818 18:39:08.069524  160317 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0818 18:39:08.069573  160317 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0818 18:39:08.069646  160317 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0818 18:39:08.069742  160317 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0818 18:39:08.069832  160317 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0818 18:39:08.069896  160317 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0818 18:39:08.072984  160317 out.go:235]   - Generating certificates and keys ...
	I0818 18:39:08.073099  160317 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0818 18:39:08.073167  160317 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0818 18:39:08.073240  160317 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0818 18:39:08.073302  160317 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0818 18:39:08.073368  160317 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0818 18:39:08.073422  160317 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0818 18:39:08.073477  160317 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0818 18:39:08.073606  160317 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-677874 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0818 18:39:08.073664  160317 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0818 18:39:08.073777  160317 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-677874 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0818 18:39:08.073841  160317 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0818 18:39:08.073902  160317 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0818 18:39:08.073945  160317 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0818 18:39:08.073999  160317 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0818 18:39:08.074050  160317 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0818 18:39:08.074105  160317 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0818 18:39:08.074157  160317 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0818 18:39:08.074220  160317 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0818 18:39:08.074273  160317 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0818 18:39:08.074352  160317 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0818 18:39:08.074416  160317 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0818 18:39:08.076007  160317 out.go:235]   - Booting up control plane ...
	I0818 18:39:08.076184  160317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0818 18:39:08.076298  160317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0818 18:39:08.076421  160317 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0818 18:39:08.076558  160317 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0818 18:39:08.076659  160317 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0818 18:39:08.076716  160317 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0818 18:39:08.076863  160317 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0818 18:39:08.076979  160317 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0818 18:39:08.077044  160317 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001593553s
	I0818 18:39:08.077126  160317 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0818 18:39:08.077192  160317 kubeadm.go:310] [api-check] The API server is healthy after 5.502063411s
	I0818 18:39:08.077306  160317 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0818 18:39:08.077439  160317 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0818 18:39:08.077506  160317 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0818 18:39:08.077697  160317 kubeadm.go:310] [mark-control-plane] Marking the node addons-677874 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0818 18:39:08.077769  160317 kubeadm.go:310] [bootstrap-token] Using token: iz38uv.d5pe0jm14cziae3o
	I0818 18:39:08.079684  160317 out.go:235]   - Configuring RBAC rules ...
	I0818 18:39:08.079810  160317 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0818 18:39:08.079961  160317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0818 18:39:08.080104  160317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0818 18:39:08.080243  160317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0818 18:39:08.080358  160317 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0818 18:39:08.080453  160317 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0818 18:39:08.080574  160317 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0818 18:39:08.080621  160317 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0818 18:39:08.080673  160317 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0818 18:39:08.080681  160317 kubeadm.go:310] 
	I0818 18:39:08.080740  160317 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0818 18:39:08.080747  160317 kubeadm.go:310] 
	I0818 18:39:08.080821  160317 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0818 18:39:08.080828  160317 kubeadm.go:310] 
	I0818 18:39:08.080852  160317 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0818 18:39:08.080912  160317 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0818 18:39:08.080963  160317 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0818 18:39:08.080971  160317 kubeadm.go:310] 
	I0818 18:39:08.081023  160317 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0818 18:39:08.081030  160317 kubeadm.go:310] 
	I0818 18:39:08.081076  160317 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0818 18:39:08.081083  160317 kubeadm.go:310] 
	I0818 18:39:08.081136  160317 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0818 18:39:08.081211  160317 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0818 18:39:08.081281  160317 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0818 18:39:08.081288  160317 kubeadm.go:310] 
	I0818 18:39:08.081369  160317 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0818 18:39:08.081446  160317 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0818 18:39:08.081454  160317 kubeadm.go:310] 
	I0818 18:39:08.081535  160317 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token iz38uv.d5pe0jm14cziae3o \
	I0818 18:39:08.081637  160317 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cb529aca1171cc5c104017415e10673846c9a8c9551cd89ab9bb22778a41ec9d \
	I0818 18:39:08.081660  160317 kubeadm.go:310] 	--control-plane 
	I0818 18:39:08.081668  160317 kubeadm.go:310] 
	I0818 18:39:08.081750  160317 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0818 18:39:08.081762  160317 kubeadm.go:310] 
	I0818 18:39:08.081842  160317 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token iz38uv.d5pe0jm14cziae3o \
	I0818 18:39:08.081957  160317 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:cb529aca1171cc5c104017415e10673846c9a8c9551cd89ab9bb22778a41ec9d 
	I0818 18:39:08.081970  160317 cni.go:84] Creating CNI manager for ""
	I0818 18:39:08.081977  160317 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0818 18:39:08.083965  160317 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0818 18:39:08.085978  160317 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0818 18:39:08.090147  160317 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0818 18:39:08.090168  160317 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0818 18:39:08.109073  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0818 18:39:08.394152  160317 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 18:39:08.394282  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:08.394358  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-677874 minikube.k8s.io/updated_at=2024_08_18T18_39_08_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5 minikube.k8s.io/name=addons-677874 minikube.k8s.io/primary=true
	I0818 18:39:08.621976  160317 ops.go:34] apiserver oom_adj: -16
	I0818 18:39:08.622115  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:09.122156  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:09.622710  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:10.122822  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:10.622210  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:11.122260  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:11.622703  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:12.122746  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:12.623112  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:13.122333  160317 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0818 18:39:13.225052  160317 kubeadm.go:1113] duration metric: took 4.830814567s to wait for elevateKubeSystemPrivileges
	I0818 18:39:13.225088  160317 kubeadm.go:394] duration metric: took 19.091552153s to StartCluster
	I0818 18:39:13.225108  160317 settings.go:142] acquiring lock: {Name:mk0e4cdfcdf22fb3f19678cc9275d3e9545c0e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:13.225841  160317 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 18:39:13.226282  160317 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/kubeconfig: {Name:mk1cf742f712a0c2eee94d91acebc845c12c0cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:39:13.227133  160317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0818 18:39:13.227169  160317 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0818 18:39:13.227423  160317 config.go:182] Loaded profile config "addons-677874": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 18:39:13.227459  160317 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0818 18:39:13.227551  160317 addons.go:69] Setting yakd=true in profile "addons-677874"
	I0818 18:39:13.227572  160317 addons.go:234] Setting addon yakd=true in "addons-677874"
	I0818 18:39:13.227596  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.227618  160317 addons.go:69] Setting inspektor-gadget=true in profile "addons-677874"
	I0818 18:39:13.227643  160317 addons.go:234] Setting addon inspektor-gadget=true in "addons-677874"
	I0818 18:39:13.227678  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.228079  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.228220  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.228479  160317 addons.go:69] Setting metrics-server=true in profile "addons-677874"
	I0818 18:39:13.228504  160317 addons.go:234] Setting addon metrics-server=true in "addons-677874"
	I0818 18:39:13.228531  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.228931  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.230806  160317 addons.go:69] Setting cloud-spanner=true in profile "addons-677874"
	I0818 18:39:13.230846  160317 addons.go:234] Setting addon cloud-spanner=true in "addons-677874"
	I0818 18:39:13.230881  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.231441  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.235127  160317 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-677874"
	I0818 18:39:13.235224  160317 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-677874"
	I0818 18:39:13.235378  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.239574  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.243348  160317 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-677874"
	I0818 18:39:13.243420  160317 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-677874"
	I0818 18:39:13.243455  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.244019  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.254396  160317 addons.go:69] Setting registry=true in profile "addons-677874"
	I0818 18:39:13.254445  160317 addons.go:234] Setting addon registry=true in "addons-677874"
	I0818 18:39:13.254482  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.254943  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.255087  160317 addons.go:69] Setting default-storageclass=true in profile "addons-677874"
	I0818 18:39:13.255115  160317 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-677874"
	I0818 18:39:13.255365  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.268008  160317 addons.go:69] Setting gcp-auth=true in profile "addons-677874"
	I0818 18:39:13.268051  160317 mustload.go:65] Loading cluster: addons-677874
	I0818 18:39:13.268237  160317 config.go:182] Loaded profile config "addons-677874": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 18:39:13.268491  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.272804  160317 addons.go:69] Setting storage-provisioner=true in profile "addons-677874"
	I0818 18:39:13.272931  160317 addons.go:234] Setting addon storage-provisioner=true in "addons-677874"
	I0818 18:39:13.273066  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.274667  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.283117  160317 addons.go:69] Setting ingress=true in profile "addons-677874"
	I0818 18:39:13.283222  160317 addons.go:234] Setting addon ingress=true in "addons-677874"
	I0818 18:39:13.283366  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.283892  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.289689  160317 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-677874"
	I0818 18:39:13.289744  160317 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-677874"
	I0818 18:39:13.290710  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.299087  160317 addons.go:69] Setting ingress-dns=true in profile "addons-677874"
	I0818 18:39:13.299151  160317 addons.go:234] Setting addon ingress-dns=true in "addons-677874"
	I0818 18:39:13.299230  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.299931  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.303852  160317 addons.go:69] Setting volcano=true in profile "addons-677874"
	I0818 18:39:13.303888  160317 addons.go:234] Setting addon volcano=true in "addons-677874"
	I0818 18:39:13.303924  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.304362  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.312451  160317 out.go:177] * Verifying Kubernetes components...
	I0818 18:39:13.321313  160317 addons.go:69] Setting volumesnapshots=true in profile "addons-677874"
	I0818 18:39:13.321658  160317 addons.go:234] Setting addon volumesnapshots=true in "addons-677874"
	I0818 18:39:13.321969  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.335596  160317 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0818 18:39:13.339725  160317 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:39:13.340321  160317 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0818 18:39:13.342178  160317 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 18:39:13.342208  160317 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 18:39:13.342338  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.342815  160317 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0818 18:39:13.342846  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0818 18:39:13.342923  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.366579  160317 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 18:39:13.369568  160317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:39:13.369592  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 18:39:13.369660  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.390707  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.393986  160317 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0818 18:39:13.426198  160317 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0818 18:39:13.426227  160317 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0818 18:39:13.426293  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.426479  160317 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0818 18:39:13.449994  160317 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0818 18:39:13.451021  160317 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0818 18:39:13.451119  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.460331  160317 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0818 18:39:13.462224  160317 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0818 18:39:13.462242  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0818 18:39:13.462312  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.474348  160317 addons.go:234] Setting addon default-storageclass=true in "addons-677874"
	I0818 18:39:13.474395  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.479326  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.497581  160317 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0818 18:39:13.499262  160317 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0818 18:39:13.501143  160317 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0818 18:39:13.501162  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0818 18:39:13.501227  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.510499  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.513371  160317 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0818 18:39:13.513469  160317 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0818 18:39:13.516333  160317 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0818 18:39:13.516380  160317 out.go:177]   - Using image docker.io/registry:2.8.3
	I0818 18:39:13.517986  160317 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0818 18:39:13.518016  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0818 18:39:13.518085  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.526111  160317 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0818 18:39:13.526144  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0818 18:39:13.526214  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.531095  160317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:39:13.531326  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0818 18:39:13.531493  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0818 18:39:13.533796  160317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0818 18:39:13.533886  160317 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0818 18:39:13.533953  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.536053  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0818 18:39:13.536115  160317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0818 18:39:13.538315  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0818 18:39:13.540049  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0818 18:39:13.541839  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0818 18:39:13.541881  160317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:39:13.543594  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0818 18:39:13.543889  160317 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0818 18:39:13.543928  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0818 18:39:13.543991  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.547378  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0818 18:39:13.548889  160317 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0818 18:39:13.550921  160317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0818 18:39:13.550941  160317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0818 18:39:13.551010  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.563973  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.599585  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.619472  160317 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-677874"
	I0818 18:39:13.619513  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:13.619955  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:13.632156  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.650644  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.685935  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.720274  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.730138  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.749288  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.749962  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.750495  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.754418  160317 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 18:39:13.754438  160317 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 18:39:13.754500  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.754658  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.756447  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.791525  160317 out.go:177]   - Using image docker.io/busybox:stable
	I0818 18:39:13.793244  160317 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0818 18:39:13.795125  160317 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0818 18:39:13.795151  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0818 18:39:13.795218  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:13.807154  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	W0818 18:39:13.812231  160317 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0818 18:39:13.812271  160317 retry.go:31] will retry after 239.756291ms: ssh: handshake failed: EOF
	I0818 18:39:13.830506  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:13.969589  160317 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0818 18:39:13.969725  160317 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:39:14.186405  160317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 18:39:14.186426  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0818 18:39:14.370084  160317 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0818 18:39:14.370164  160317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0818 18:39:14.383915  160317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 18:39:14.383993  160317 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 18:39:14.449599  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0818 18:39:14.504277  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0818 18:39:14.520281  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0818 18:39:14.528327  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0818 18:39:14.541152  160317 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0818 18:39:14.541227  160317 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0818 18:39:14.551966  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0818 18:39:14.554178  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0818 18:39:14.554565  160317 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 18:39:14.554606  160317 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 18:39:14.561009  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:39:14.579838  160317 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0818 18:39:14.579913  160317 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0818 18:39:14.615658  160317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0818 18:39:14.615734  160317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0818 18:39:14.635700  160317 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0818 18:39:14.635777  160317 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0818 18:39:14.643992  160317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0818 18:39:14.644073  160317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0818 18:39:14.861836  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 18:39:14.911069  160317 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0818 18:39:14.911098  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0818 18:39:14.922243  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 18:39:15.023270  160317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0818 18:39:15.023303  160317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0818 18:39:15.053667  160317 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0818 18:39:15.053695  160317 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0818 18:39:15.131216  160317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0818 18:39:15.131246  160317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0818 18:39:15.189030  160317 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0818 18:39:15.189059  160317 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0818 18:39:15.318732  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0818 18:39:15.399372  160317 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0818 18:39:15.399398  160317 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0818 18:39:15.513511  160317 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0818 18:39:15.513542  160317 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0818 18:39:15.595525  160317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0818 18:39:15.595552  160317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0818 18:39:15.610075  160317 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0818 18:39:15.610151  160317 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0818 18:39:15.736542  160317 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0818 18:39:15.736641  160317 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0818 18:39:15.858648  160317 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0818 18:39:15.858674  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0818 18:39:16.000022  160317 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0818 18:39:16.000050  160317 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0818 18:39:16.020151  160317 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0818 18:39:16.020179  160317 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0818 18:39:16.097150  160317 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0818 18:39:16.097181  160317 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0818 18:39:16.224845  160317 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:39:16.224869  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0818 18:39:16.287286  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0818 18:39:16.395830  160317 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.426076211s)
	I0818 18:39:16.396745  160317 node_ready.go:35] waiting up to 6m0s for node "addons-677874" to be "Ready" ...
	I0818 18:39:16.400464  160317 node_ready.go:49] node "addons-677874" has status "Ready":"True"
	I0818 18:39:16.400495  160317 node_ready.go:38] duration metric: took 3.721351ms for node "addons-677874" to be "Ready" ...
	I0818 18:39:16.400505  160317 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:39:16.413296  160317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-b9x4z" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:16.419516  160317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0818 18:39:16.419541  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0818 18:39:16.420305  160317 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.450687618s)
	I0818 18:39:16.420331  160317 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0818 18:39:16.553485  160317 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0818 18:39:16.553513  160317 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0818 18:39:16.688606  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:39:16.923874  160317 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-677874" context rescaled to 1 replicas
	I0818 18:39:16.975971  160317 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0818 18:39:16.976038  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0818 18:39:17.050707  160317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0818 18:39:17.050786  160317 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0818 18:39:17.354464  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0818 18:39:17.453776  160317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0818 18:39:17.453838  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0818 18:39:17.714082  160317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0818 18:39:17.714148  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0818 18:39:18.072322  160317 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0818 18:39:18.072346  160317 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0818 18:39:18.355053  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.905370713s)
	I0818 18:39:18.355324  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.850974844s)
	I0818 18:39:18.355401  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.835050378s)
	I0818 18:39:18.355451  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.827057409s)
	I0818 18:39:18.424604  160317 pod_ready.go:103] pod "coredns-6f6b679f8f-b9x4z" in "kube-system" namespace has status "Ready":"False"
	I0818 18:39:18.513424  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0818 18:39:20.811399  160317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0818 18:39:20.811550  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:20.849761  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:20.920330  160317 pod_ready.go:103] pod "coredns-6f6b679f8f-b9x4z" in "kube-system" namespace has status "Ready":"False"
	I0818 18:39:21.332151  160317 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0818 18:39:21.475476  160317 addons.go:234] Setting addon gcp-auth=true in "addons-677874"
	I0818 18:39:21.475566  160317 host.go:66] Checking if "addons-677874" exists ...
	I0818 18:39:21.476112  160317 cli_runner.go:164] Run: docker container inspect addons-677874 --format={{.State.Status}}
	I0818 18:39:21.504238  160317 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0818 18:39:21.504291  160317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-677874
	I0818 18:39:21.529097  160317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38317 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/addons-677874/id_rsa Username:docker}
	I0818 18:39:22.920868  160317 pod_ready.go:103] pod "coredns-6f6b679f8f-b9x4z" in "kube-system" namespace has status "Ready":"False"
	I0818 18:39:23.818638  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.266586786s)
	I0818 18:39:23.818830  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.264586365s)
	I0818 18:39:23.818849  160317 addons.go:475] Verifying addon ingress=true in "addons-677874"
	I0818 18:39:23.818923  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.257843854s)
	I0818 18:39:23.818959  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.957099637s)
	I0818 18:39:23.819161  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.896891437s)
	I0818 18:39:23.819182  160317 addons.go:475] Verifying addon metrics-server=true in "addons-677874"
	I0818 18:39:23.819210  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.500443264s)
	I0818 18:39:23.819225  160317 addons.go:475] Verifying addon registry=true in "addons-677874"
	I0818 18:39:23.819743  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.532423144s)
	I0818 18:39:23.820150  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.131510016s)
	W0818 18:39:23.820186  160317 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0818 18:39:23.820204  160317 retry.go:31] will retry after 364.482472ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0818 18:39:23.820268  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.465719704s)
	I0818 18:39:23.822051  160317 out.go:177] * Verifying registry addon...
	I0818 18:39:23.822064  160317 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-677874 service yakd-dashboard -n yakd-dashboard
	
	I0818 18:39:23.822206  160317 out.go:177] * Verifying ingress addon...
	I0818 18:39:23.824802  160317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0818 18:39:23.825788  160317 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0818 18:39:23.917657  160317 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0818 18:39:23.917728  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:24.030950  160317 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0818 18:39:24.031039  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:24.184837  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0818 18:39:24.329954  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:24.331048  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:24.731279  160317 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.226970703s)
	I0818 18:39:24.731380  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.217876207s)
	I0818 18:39:24.731592  160317 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-677874"
	I0818 18:39:24.733658  160317 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0818 18:39:24.733724  160317 out.go:177] * Verifying csi-hostpath-driver addon...
	I0818 18:39:24.736840  160317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0818 18:39:24.738908  160317 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0818 18:39:24.742249  160317 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0818 18:39:24.742284  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:24.743502  160317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0818 18:39:24.743525  160317 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0818 18:39:24.807001  160317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0818 18:39:24.807026  160317 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0818 18:39:24.828729  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:24.832201  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:24.874830  160317 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0818 18:39:24.874856  160317 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0818 18:39:24.970822  160317 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0818 18:39:25.242810  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:25.330728  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:25.332798  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:25.420363  160317 pod_ready.go:103] pod "coredns-6f6b679f8f-b9x4z" in "kube-system" namespace has status "Ready":"False"
	I0818 18:39:25.743340  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:25.830636  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:25.831154  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:25.958371  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.773437231s)
	I0818 18:39:26.011248  160317 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.040363318s)
	I0818 18:39:26.014933  160317 addons.go:475] Verifying addon gcp-auth=true in "addons-677874"
	I0818 18:39:26.018092  160317 out.go:177] * Verifying gcp-auth addon...
	I0818 18:39:26.021689  160317 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0818 18:39:26.025499  160317 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0818 18:39:26.242925  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:26.343160  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:26.343321  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:26.743248  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:26.828773  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:26.832110  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:27.242381  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:27.331955  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:27.333398  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:27.742413  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:27.843195  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:27.844636  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:27.920549  160317 pod_ready.go:103] pod "coredns-6f6b679f8f-b9x4z" in "kube-system" namespace has status "Ready":"False"
	I0818 18:39:28.243717  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:28.330500  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:28.331206  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:28.742130  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:28.829773  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:28.831562  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:29.242320  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:29.330246  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:29.332510  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:29.743032  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:29.829995  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:29.830919  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:29.922453  160317 pod_ready.go:93] pod "coredns-6f6b679f8f-b9x4z" in "kube-system" namespace has status "Ready":"True"
	I0818 18:39:29.922532  160317 pod_ready.go:82] duration metric: took 13.509197362s for pod "coredns-6f6b679f8f-b9x4z" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:29.922561  160317 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-q297d" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:29.924698  160317 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-q297d" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-q297d" not found
	I0818 18:39:29.924765  160317 pod_ready.go:82] duration metric: took 2.182131ms for pod "coredns-6f6b679f8f-q297d" in "kube-system" namespace to be "Ready" ...
	E0818 18:39:29.924791  160317 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-q297d" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-q297d" not found
	I0818 18:39:29.924813  160317 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-677874" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:29.930202  160317 pod_ready.go:93] pod "etcd-addons-677874" in "kube-system" namespace has status "Ready":"True"
	I0818 18:39:29.930230  160317 pod_ready.go:82] duration metric: took 5.384165ms for pod "etcd-addons-677874" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:29.930244  160317 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-677874" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:29.936232  160317 pod_ready.go:93] pod "kube-apiserver-addons-677874" in "kube-system" namespace has status "Ready":"True"
	I0818 18:39:29.936257  160317 pod_ready.go:82] duration metric: took 6.005037ms for pod "kube-apiserver-addons-677874" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:29.936268  160317 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-677874" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:29.941258  160317 pod_ready.go:93] pod "kube-controller-manager-addons-677874" in "kube-system" namespace has status "Ready":"True"
	I0818 18:39:29.941286  160317 pod_ready.go:82] duration metric: took 5.009462ms for pod "kube-controller-manager-addons-677874" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:29.941298  160317 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g8567" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:30.119655  160317 pod_ready.go:93] pod "kube-proxy-g8567" in "kube-system" namespace has status "Ready":"True"
	I0818 18:39:30.119689  160317 pod_ready.go:82] duration metric: took 178.383275ms for pod "kube-proxy-g8567" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:30.119704  160317 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-677874" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:30.244120  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:30.330425  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:30.331031  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:30.517614  160317 pod_ready.go:93] pod "kube-scheduler-addons-677874" in "kube-system" namespace has status "Ready":"True"
	I0818 18:39:30.517636  160317 pod_ready.go:82] duration metric: took 397.924318ms for pod "kube-scheduler-addons-677874" in "kube-system" namespace to be "Ready" ...
	I0818 18:39:30.517647  160317 pod_ready.go:39] duration metric: took 14.117127862s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:39:30.517662  160317 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:39:30.517724  160317 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:39:30.556493  160317 api_server.go:72] duration metric: took 17.329291864s to wait for apiserver process to appear ...
	I0818 18:39:30.556519  160317 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:39:30.556539  160317 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0818 18:39:30.565174  160317 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0818 18:39:30.566305  160317 api_server.go:141] control plane version: v1.31.0
	I0818 18:39:30.566338  160317 api_server.go:131] duration metric: took 9.809653ms to wait for apiserver health ...
	I0818 18:39:30.566359  160317 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:39:30.727240  160317 system_pods.go:59] 18 kube-system pods found
	I0818 18:39:30.727289  160317 system_pods.go:61] "coredns-6f6b679f8f-b9x4z" [0bc9db53-2f13-4b32-a6ae-d99956fd204d] Running
	I0818 18:39:30.727301  160317 system_pods.go:61] "csi-hostpath-attacher-0" [64ab3233-7caf-47b6-a96a-c5c8ed0fd7a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0818 18:39:30.727309  160317 system_pods.go:61] "csi-hostpath-resizer-0" [41fac1c4-b68f-46b6-b955-846583a645e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0818 18:39:30.727321  160317 system_pods.go:61] "csi-hostpathplugin-2xlbg" [b4502bbc-f05b-43aa-9f12-a969a2bf878d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0818 18:39:30.727327  160317 system_pods.go:61] "etcd-addons-677874" [2474791e-4db7-4de4-bee6-2580b4fe21f0] Running
	I0818 18:39:30.727334  160317 system_pods.go:61] "kindnet-nvg9b" [8f8faf09-89cb-447b-8670-6812bfedeef2] Running
	I0818 18:39:30.727339  160317 system_pods.go:61] "kube-apiserver-addons-677874" [cdca7af8-416b-4173-9555-bee3acb55f94] Running
	I0818 18:39:30.727351  160317 system_pods.go:61] "kube-controller-manager-addons-677874" [c6f277da-682b-4755-ba86-3800a5777196] Running
	I0818 18:39:30.727357  160317 system_pods.go:61] "kube-ingress-dns-minikube" [bdcb327c-21eb-4ada-8154-f647fda885e2] Running
	I0818 18:39:30.727370  160317 system_pods.go:61] "kube-proxy-g8567" [395c67fa-9074-4ad7-9710-c8f9d95cda59] Running
	I0818 18:39:30.727375  160317 system_pods.go:61] "kube-scheduler-addons-677874" [014fc8ff-ff87-43c8-a0e5-39684e2c3774] Running
	I0818 18:39:30.727380  160317 system_pods.go:61] "metrics-server-8988944d9-ssvzm" [5519d6dc-d96d-44ac-a20c-2ada1e90ed3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 18:39:30.727390  160317 system_pods.go:61] "nvidia-device-plugin-daemonset-5z977" [454d0444-42bd-4534-be28-2b2eb61b9f4b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0818 18:39:30.727397  160317 system_pods.go:61] "registry-6fb4cdfc84-xmzmk" [f27b6e94-6969-4d20-b949-6bda33e68f47] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0818 18:39:30.727408  160317 system_pods.go:61] "registry-proxy-kdttv" [e0470018-6849-4628-b89c-359ab1e73180] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0818 18:39:30.727415  160317 system_pods.go:61] "snapshot-controller-56fcc65765-m84dz" [6494a3d7-a4e0-4cc7-920d-d4c52f3c6316] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:39:30.727422  160317 system_pods.go:61] "snapshot-controller-56fcc65765-zf8f4" [e89dba4d-62c5-4c41-8d67-c24edacf67ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:39:30.727430  160317 system_pods.go:61] "storage-provisioner" [c8a86353-90ca-414c-becb-49d2e6b8cf79] Running
	I0818 18:39:30.727437  160317 system_pods.go:74] duration metric: took 161.063285ms to wait for pod list to return data ...
	I0818 18:39:30.727450  160317 default_sa.go:34] waiting for default service account to be created ...
	I0818 18:39:30.741960  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:30.845434  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:30.846418  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:30.921731  160317 default_sa.go:45] found service account: "default"
	I0818 18:39:30.921761  160317 default_sa.go:55] duration metric: took 194.303539ms for default service account to be created ...
	I0818 18:39:30.921770  160317 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 18:39:31.128584  160317 system_pods.go:86] 18 kube-system pods found
	I0818 18:39:31.128623  160317 system_pods.go:89] "coredns-6f6b679f8f-b9x4z" [0bc9db53-2f13-4b32-a6ae-d99956fd204d] Running
	I0818 18:39:31.128635  160317 system_pods.go:89] "csi-hostpath-attacher-0" [64ab3233-7caf-47b6-a96a-c5c8ed0fd7a4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0818 18:39:31.128643  160317 system_pods.go:89] "csi-hostpath-resizer-0" [41fac1c4-b68f-46b6-b955-846583a645e9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0818 18:39:31.128652  160317 system_pods.go:89] "csi-hostpathplugin-2xlbg" [b4502bbc-f05b-43aa-9f12-a969a2bf878d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0818 18:39:31.128657  160317 system_pods.go:89] "etcd-addons-677874" [2474791e-4db7-4de4-bee6-2580b4fe21f0] Running
	I0818 18:39:31.128663  160317 system_pods.go:89] "kindnet-nvg9b" [8f8faf09-89cb-447b-8670-6812bfedeef2] Running
	I0818 18:39:31.128668  160317 system_pods.go:89] "kube-apiserver-addons-677874" [cdca7af8-416b-4173-9555-bee3acb55f94] Running
	I0818 18:39:31.128674  160317 system_pods.go:89] "kube-controller-manager-addons-677874" [c6f277da-682b-4755-ba86-3800a5777196] Running
	I0818 18:39:31.128686  160317 system_pods.go:89] "kube-ingress-dns-minikube" [bdcb327c-21eb-4ada-8154-f647fda885e2] Running
	I0818 18:39:31.128691  160317 system_pods.go:89] "kube-proxy-g8567" [395c67fa-9074-4ad7-9710-c8f9d95cda59] Running
	I0818 18:39:31.128696  160317 system_pods.go:89] "kube-scheduler-addons-677874" [014fc8ff-ff87-43c8-a0e5-39684e2c3774] Running
	I0818 18:39:31.128709  160317 system_pods.go:89] "metrics-server-8988944d9-ssvzm" [5519d6dc-d96d-44ac-a20c-2ada1e90ed3e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0818 18:39:31.128719  160317 system_pods.go:89] "nvidia-device-plugin-daemonset-5z977" [454d0444-42bd-4534-be28-2b2eb61b9f4b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0818 18:39:31.128730  160317 system_pods.go:89] "registry-6fb4cdfc84-xmzmk" [f27b6e94-6969-4d20-b949-6bda33e68f47] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0818 18:39:31.128736  160317 system_pods.go:89] "registry-proxy-kdttv" [e0470018-6849-4628-b89c-359ab1e73180] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0818 18:39:31.128744  160317 system_pods.go:89] "snapshot-controller-56fcc65765-m84dz" [6494a3d7-a4e0-4cc7-920d-d4c52f3c6316] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:39:31.128751  160317 system_pods.go:89] "snapshot-controller-56fcc65765-zf8f4" [e89dba4d-62c5-4c41-8d67-c24edacf67ed] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0818 18:39:31.128761  160317 system_pods.go:89] "storage-provisioner" [c8a86353-90ca-414c-becb-49d2e6b8cf79] Running
	I0818 18:39:31.128770  160317 system_pods.go:126] duration metric: took 206.983665ms to wait for k8s-apps to be running ...
	I0818 18:39:31.128781  160317 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 18:39:31.128839  160317 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:39:31.143209  160317 system_svc.go:56] duration metric: took 14.418025ms WaitForService to wait for kubelet
	I0818 18:39:31.143242  160317 kubeadm.go:582] duration metric: took 17.916045531s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:39:31.143263  160317 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:39:31.242036  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:31.318500  160317 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0818 18:39:31.318537  160317 node_conditions.go:123] node cpu capacity is 2
	I0818 18:39:31.318550  160317 node_conditions.go:105] duration metric: took 175.273064ms to run NodePressure ...
	I0818 18:39:31.318564  160317 start.go:241] waiting for startup goroutines ...
	I0818 18:39:31.328519  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:31.330677  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:31.742711  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:31.842452  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:31.842956  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:32.242684  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:32.328609  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:32.332546  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:32.741990  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:32.831903  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:32.832230  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:33.242330  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:33.342950  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:33.343199  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:33.743311  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:33.832611  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:33.834168  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:34.247916  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:34.331812  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:34.332636  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:34.745413  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:34.847437  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:34.847694  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:35.243749  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:35.333732  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:35.335130  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:35.759930  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:35.858599  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:35.859613  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:36.256420  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:36.354083  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:36.355448  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:36.745638  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:36.831466  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:36.832512  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:37.242813  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:37.338759  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:37.339071  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:37.742127  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:37.842682  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:37.843172  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:38.242755  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:38.330660  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:38.331380  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:38.741914  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:38.830442  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:38.831743  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:39.242179  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:39.331169  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:39.332622  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:39.742044  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:39.830668  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:39.831417  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:40.243090  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:40.329380  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:40.331467  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:40.742572  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:40.828394  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:40.830788  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:41.242524  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:41.329371  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:41.330868  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:41.741540  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:41.829185  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:41.830549  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:42.257678  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:42.334766  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:42.335840  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:42.742477  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:42.830373  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:42.830916  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:43.243023  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:43.329151  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:43.332642  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:43.743041  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:43.830711  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:43.833338  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:44.242016  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:44.331843  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:44.332596  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:44.742458  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:44.829556  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:44.830706  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:45.248338  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:45.331680  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:45.333983  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:45.743324  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:45.829855  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:45.833209  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:46.241514  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:46.330057  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:46.331018  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:46.742346  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:46.828608  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:46.830818  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:47.244905  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:47.333083  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:47.333433  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:47.741437  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:47.828980  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:47.830810  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:48.241354  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:48.329739  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:48.334924  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:48.742394  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:48.829478  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:48.830531  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:49.241271  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:49.331225  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:49.332303  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:49.742036  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:49.829724  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:49.831760  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:50.243262  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:50.343458  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0818 18:39:50.345797  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:50.745848  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:50.831553  160317 kapi.go:107] duration metric: took 27.006749679s to wait for kubernetes.io/minikube-addons=registry ...
	I0818 18:39:50.835460  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:51.244350  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:51.331967  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:51.748682  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:51.832013  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:52.244055  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:52.329724  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:52.742385  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:52.830704  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:53.241946  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:53.330850  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:53.741503  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:53.830763  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:54.241779  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:54.329570  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:54.741807  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:54.829894  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:55.241624  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:55.330498  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:55.742351  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:55.830729  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:56.242004  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:56.329481  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:56.744823  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:56.846092  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:57.242978  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:57.331647  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:57.742468  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:57.830625  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:58.244546  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:58.331580  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:58.743178  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:58.830505  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:59.245297  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:59.330616  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:39:59.742958  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:39:59.830890  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:00.265044  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:00.370272  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:00.742241  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:00.830962  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:01.242344  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:01.330486  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:01.741206  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:01.830068  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:02.242494  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:02.331303  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:02.742108  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:02.830158  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:03.241237  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:03.330583  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:03.742653  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:03.830917  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:04.244544  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:04.331013  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:04.742491  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:04.830758  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:05.242622  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:05.331115  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:05.743908  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:05.844491  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:06.242035  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:06.330422  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:06.742586  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:06.831415  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:07.242353  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:07.329837  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:07.741757  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:07.830725  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:08.242720  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:08.329975  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:08.742957  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:08.843200  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:09.242163  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:09.332041  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:09.743856  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:09.832234  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:10.243714  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:10.343967  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:10.745687  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:10.829824  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:11.242420  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:11.330763  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:11.744366  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:11.844384  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:12.241817  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:12.329883  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:12.745068  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:12.833672  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:13.240923  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:13.334098  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:13.742033  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0818 18:40:13.829794  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:14.241592  160317 kapi.go:107] duration metric: took 49.504750023s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0818 18:40:14.330211  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:14.830415  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:15.330209  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:15.830716  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:16.330387  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:16.831109  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:17.330237  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:17.830758  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:18.330383  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:18.830635  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:19.330702  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:19.830317  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:20.330640  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:20.830834  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:21.330055  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:21.829967  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:22.330770  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:22.829992  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:23.330249  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:23.830403  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:24.329768  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:24.830788  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:25.329881  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:25.830194  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:26.333376  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:26.831159  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:27.330671  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:27.830984  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:28.330521  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:28.830641  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:29.330375  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:29.831332  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:30.330782  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:30.832903  160317 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0818 18:40:31.331064  160317 kapi.go:107] duration metric: took 1m7.505271205s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0818 18:40:48.032883  160317 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0818 18:40:48.032906  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:48.525724  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:49.025871  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:49.525380  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:50.025843  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:50.525053  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:51.026072  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:51.525654  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:52.025641  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:52.525229  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:53.025183  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:53.524951  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:54.025882  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:54.525721  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:55.026839  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:55.525555  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:56.025675  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:56.525948  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:57.026365  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:57.525859  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:58.026324  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:58.526008  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:59.025633  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:40:59.525272  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:00.031989  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:00.525589  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:01.025138  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:01.526250  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:02.028284  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:02.525460  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:03.025873  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:03.525568  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:04.025768  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:04.525313  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:05.026135  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:05.525036  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:06.025904  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:06.526297  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:07.025535  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:07.525706  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:08.025989  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:08.525142  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:09.026979  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:09.525913  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:10.028113  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:10.526305  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:11.025043  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:11.526107  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:12.026176  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:12.525024  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:13.025440  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:13.525715  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:14.025327  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:14.525165  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:15.031734  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:15.525769  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:16.025179  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:16.526482  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:17.026045  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:17.526106  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:18.027599  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:18.525474  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:19.027870  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:19.525569  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:20.026172  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:20.526742  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:21.026232  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:21.524991  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:22.025416  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:22.525130  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:23.025944  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:23.525715  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:24.025103  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:24.525561  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:25.025621  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:25.525224  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:26.025459  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:26.525323  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:27.025558  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:27.525167  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:28.026178  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:28.526099  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:29.027399  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:29.527191  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:30.037804  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:30.525535  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:31.025710  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:31.526027  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:32.025159  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:32.526386  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:33.026146  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:33.525062  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:34.025954  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:34.525804  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:35.027867  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:35.525045  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:36.025641  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:36.525691  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:37.026368  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:37.524926  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:38.025499  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:38.525522  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:39.027436  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:39.525408  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:40.036861  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:40.525890  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:41.025566  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:41.526166  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:42.025619  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:42.525216  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:43.025940  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:43.525746  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:44.029283  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:44.525171  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:45.026286  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:45.525156  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:46.025479  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:46.525386  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:47.025364  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:47.526020  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:48.025614  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:48.525705  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:49.025624  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:49.525564  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:50.025698  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:50.525012  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:51.026313  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:51.525106  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:52.025952  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:52.526018  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:53.025746  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:53.525734  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:54.025818  160317 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0818 18:41:54.525776  160317 kapi.go:107] duration metric: took 2m28.504091004s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0818 18:41:54.527762  160317 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-677874 cluster.
	I0818 18:41:54.529930  160317 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0818 18:41:54.531712  160317 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0818 18:41:54.533511  160317 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner-rancher, volcano, storage-provisioner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0818 18:41:54.535134  160317 addons.go:510] duration metric: took 2m41.307662843s for enable addons: enabled=[nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner-rancher volcano storage-provisioner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0818 18:41:54.535198  160317 start.go:246] waiting for cluster config update ...
	I0818 18:41:54.535246  160317 start.go:255] writing updated cluster config ...
	I0818 18:41:54.535655  160317 ssh_runner.go:195] Run: rm -f paused
	I0818 18:41:54.883131  160317 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 18:41:54.885510  160317 out.go:177] * Done! kubectl is now configured to use "addons-677874" cluster and "default" namespace by default
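	The gcp-auth messages above explain how to opt a pod out of credential mounting. As a minimal sketch only (the pod name, image, and label value are placeholders/assumptions, not taken from this run), a manifest using the `gcp-auth-skip-secret` label mentioned in that output might look like:

	apiVersion: v1
	kind: Pod
	metadata:
	  name: example-no-gcp-auth        # hypothetical name, for illustration only
	  labels:
	    gcp-auth-skip-secret: "true"   # label key quoted in the addon output above; the value shown is an assumption
	spec:
	  containers:
	  - name: app
	    image: nginx                   # placeholder image

	For pods created before the addon finished, the output above notes the alternatives: recreate them, or rerun addons enable with --refresh.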
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	74c9e849c39cd       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   1ff6d5296b640       gadget-zxrsl
	2630a04c64b54       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   f8ccc8b1409ac       gcp-auth-89d5ffd79-z6t72
	b1d334a24ed6f       8b46b1cd48760       4 minutes ago       Running             admission                                0                   e744b0e5ab5ef       volcano-admission-77d7d48b68-xt6vs
	575e9b671a92e       289a818c8d9c5       4 minutes ago       Running             controller                               0                   fa4a9c27ccc80       ingress-nginx-controller-bc57996ff-8t8rb
	ba5997f0b4d9d       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   9e19caa652d54       csi-hostpathplugin-2xlbg
	20b9bfe71e749       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   9e19caa652d54       csi-hostpathplugin-2xlbg
	f6667bc45f656       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   9e19caa652d54       csi-hostpathplugin-2xlbg
	47a2f9ede8e55       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   9e19caa652d54       csi-hostpathplugin-2xlbg
	726c4bdd3bdb1       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   9e19caa652d54       csi-hostpathplugin-2xlbg
	368c6633bc3f5       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   77b0253501216       csi-hostpath-resizer-0
	7358df0e2788d       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   9e19caa652d54       csi-hostpathplugin-2xlbg
	da3372a4b40c2       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   5c279ac082485       volcano-controllers-56675bb4d5-dnrr5
	f7d0dd35c5a91       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   14eeb8c17247f       csi-hostpath-attacher-0
	33ddf94a97514       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   3892b3e776261       volcano-scheduler-576bc46687-gpg4k
	08dfda454f539       420193b27261a       5 minutes ago       Exited              patch                                    1                   0bab717e7cca2       ingress-nginx-admission-patch-w6zd4
	3e9b882154a46       420193b27261a       5 minutes ago       Exited              create                                   0                   b4e6d4f17a338       ingress-nginx-admission-create-m6dhw
	ef1b4c932e3dc       77bdba588b953       5 minutes ago       Running             yakd                                     0                   c6f58ad0c1e27       yakd-dashboard-67d98fc6b-vp5cf
	fcd470e330cf6       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   f533fc574b7bd       snapshot-controller-56fcc65765-zf8f4
	87dd59e0a4f65       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   38f843e982940       registry-proxy-kdttv
	fcccdbf742633       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   68918c3a9843b       snapshot-controller-56fcc65765-m84dz
	42c266f452b92       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   bcb6690b335fd       cloud-spanner-emulator-c4bc9b5f8-s5ms9
	e73a529a7fdbf       6fed88f43b276       5 minutes ago       Running             registry                                 0                   7ec6deeeb0e88       registry-6fb4cdfc84-xmzmk
	f550746bafe0c       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   0c635057aeb85       nvidia-device-plugin-daemonset-5z977
	1ee14911651f2       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   ca7b779f03e93       local-path-provisioner-86d989889c-jgkl7
	053a8ad137715       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   9beb77246d0f3       metrics-server-8988944d9-ssvzm
	5c3fa3fb2c8af       2437cf7621777       5 minutes ago       Running             coredns                                  0                   f7ee1988c74a2       coredns-6f6b679f8f-b9x4z
	f227fc7ca5861       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   d6691c6b25e15       kube-ingress-dns-minikube
	9c21e07f378cf       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   4e02b20624e25       storage-provisioner
	11b73e1314419       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   3ae536826e04f       kindnet-nvg9b
	ef56e4808a6d1       71d55d66fd4ee       5 minutes ago       Running             kube-proxy                               0                   23a5d7862a7ec       kube-proxy-g8567
	b5c356fb245ba       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   b34cf2702200a       kube-scheduler-addons-677874
	a3735d9f8018a       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   63272cb3ad491       kube-controller-manager-addons-677874
	767b896fe6db4       27e3830e14027       6 minutes ago       Running             etcd                                     0                   65d6eeddcb349       etcd-addons-677874
	5aadb625effd3       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   697e968b798f3       kube-apiserver-addons-677874
	
	
	==> containerd <==
	Aug 18 18:42:07 addons-677874 containerd[813]: time="2024-08-18T18:42:07.463212616Z" level=info msg="RemovePodSandbox \"6439015c62c940bb3a53be15a7567259a81a5f50bd09dcf124372880d43052b3\" returns successfully"
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.419860133Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.561006435Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.562616366Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.566009824Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 146.096751ms"
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.566063223Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.568274810Z" level=info msg="CreateContainer within sandbox \"1ff6d5296b6406ee71eb1ba84852f5413efae1be321980d813b8f7d44d825f4b\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.592025191Z" level=info msg="CreateContainer within sandbox \"1ff6d5296b6406ee71eb1ba84852f5413efae1be321980d813b8f7d44d825f4b\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f\""
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.592780232Z" level=info msg="StartContainer for \"74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f\""
	Aug 18 18:42:52 addons-677874 containerd[813]: time="2024-08-18T18:42:52.657474998Z" level=info msg="StartContainer for \"74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f\" returns successfully"
	Aug 18 18:42:53 addons-677874 containerd[813]: time="2024-08-18T18:42:53.917389990Z" level=info msg="shim disconnected" id=74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f namespace=k8s.io
	Aug 18 18:42:53 addons-677874 containerd[813]: time="2024-08-18T18:42:53.917453703Z" level=warning msg="cleaning up after shim disconnected" id=74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f namespace=k8s.io
	Aug 18 18:42:53 addons-677874 containerd[813]: time="2024-08-18T18:42:53.917463713Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 18 18:42:54 addons-677874 containerd[813]: time="2024-08-18T18:42:54.509086838Z" level=info msg="RemoveContainer for \"f3de08099379f1cf7012191e4b7f4b5234e9cf68fdaac8ad051788570cfb624e\""
	Aug 18 18:42:54 addons-677874 containerd[813]: time="2024-08-18T18:42:54.515143893Z" level=info msg="RemoveContainer for \"f3de08099379f1cf7012191e4b7f4b5234e9cf68fdaac8ad051788570cfb624e\" returns successfully"
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.467310133Z" level=info msg="RemoveContainer for \"6ac18febc55a9071efcb2fa8e0f6ca0e64a49d60e51c7a2228ee09558d5c03ea\""
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.474372986Z" level=info msg="RemoveContainer for \"6ac18febc55a9071efcb2fa8e0f6ca0e64a49d60e51c7a2228ee09558d5c03ea\" returns successfully"
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.476410863Z" level=info msg="StopPodSandbox for \"2c241dbce36938b8027a3133586f590b9ec986cafb18393c33de25def95b4632\""
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.484028200Z" level=info msg="TearDown network for sandbox \"2c241dbce36938b8027a3133586f590b9ec986cafb18393c33de25def95b4632\" successfully"
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.484070785Z" level=info msg="StopPodSandbox for \"2c241dbce36938b8027a3133586f590b9ec986cafb18393c33de25def95b4632\" returns successfully"
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.484693905Z" level=info msg="RemovePodSandbox for \"2c241dbce36938b8027a3133586f590b9ec986cafb18393c33de25def95b4632\""
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.484737023Z" level=info msg="Forcibly stopping sandbox \"2c241dbce36938b8027a3133586f590b9ec986cafb18393c33de25def95b4632\""
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.492138402Z" level=info msg="TearDown network for sandbox \"2c241dbce36938b8027a3133586f590b9ec986cafb18393c33de25def95b4632\" successfully"
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.497944775Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c241dbce36938b8027a3133586f590b9ec986cafb18393c33de25def95b4632\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 18 18:43:07 addons-677874 containerd[813]: time="2024-08-18T18:43:07.498071035Z" level=info msg="RemovePodSandbox \"2c241dbce36938b8027a3133586f590b9ec986cafb18393c33de25def95b4632\" returns successfully"
	
	
	==> coredns [5c3fa3fb2c8afb0feade438a4583eab5df31cbcf70e09fe9ea33e5250bbe4ec3] <==
	[INFO] 10.244.0.9:57742 - 611 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000157046s
	[INFO] 10.244.0.9:40839 - 31538 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00218872s
	[INFO] 10.244.0.9:40839 - 48944 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001980499s
	[INFO] 10.244.0.9:49494 - 30935 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000107011s
	[INFO] 10.244.0.9:49494 - 62922 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000133907s
	[INFO] 10.244.0.9:33426 - 55019 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113173s
	[INFO] 10.244.0.9:33426 - 59375 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000183868s
	[INFO] 10.244.0.9:57430 - 6026 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058461s
	[INFO] 10.244.0.9:57430 - 63989 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036586s
	[INFO] 10.244.0.9:41493 - 6074 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000129428s
	[INFO] 10.244.0.9:41493 - 23228 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000159409s
	[INFO] 10.244.0.9:38754 - 15217 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001277051s
	[INFO] 10.244.0.9:38754 - 2931 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001413149s
	[INFO] 10.244.0.9:35928 - 26696 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000105034s
	[INFO] 10.244.0.9:35928 - 3658 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000193222s
	[INFO] 10.244.0.24:54661 - 62499 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.002206228s
	[INFO] 10.244.0.24:41725 - 35169 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.006497834s
	[INFO] 10.244.0.24:57996 - 8111 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000276307s
	[INFO] 10.244.0.24:39273 - 17883 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000311629s
	[INFO] 10.244.0.24:47223 - 29426 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000135926s
	[INFO] 10.244.0.24:43833 - 12051 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132414s
	[INFO] 10.244.0.24:50617 - 15695 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002410208s
	[INFO] 10.244.0.24:43999 - 40582 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002464788s
	[INFO] 10.244.0.24:41683 - 34082 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000768055s
	[INFO] 10.244.0.24:35126 - 64479 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000849334s
	
	
	==> describe nodes <==
	Name:               addons-677874
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-677874
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=addons-677874
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T18_39_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-677874
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-677874"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:39:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-677874
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 18:45:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 18:42:11 +0000   Sun, 18 Aug 2024 18:39:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 18:42:11 +0000   Sun, 18 Aug 2024 18:39:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 18:42:11 +0000   Sun, 18 Aug 2024 18:39:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 18:42:11 +0000   Sun, 18 Aug 2024 18:39:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-677874
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9bff41868b54773a41cd52998f8f32e
	  System UUID:                5e973ec3-18d4-4ea6-a004-0000674b3d36
	  Boot ID:                    46f0c01d-aaa8-472d-87c6-dade3bb189f7
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-s5ms9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  gadget                      gadget-zxrsl                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  gcp-auth                    gcp-auth-89d5ffd79-z6t72                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-8t8rb    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m52s
	  kube-system                 coredns-6f6b679f8f-b9x4z                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m1s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 csi-hostpathplugin-2xlbg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m49s
	  kube-system                 etcd-addons-677874                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m6s
	  kube-system                 kindnet-nvg9b                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m1s
	  kube-system                 kube-apiserver-addons-677874                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-addons-677874       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-g8567                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-addons-677874                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 metrics-server-8988944d9-ssvzm              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m55s
	  kube-system                 nvidia-device-plugin-daemonset-5z977        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-6fb4cdfc84-xmzmk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 registry-proxy-kdttv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-m84dz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-56fcc65765-zf8f4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  local-path-storage          local-path-provisioner-86d989889c-jgkl7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  volcano-system              volcano-admission-77d7d48b68-xt6vs          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-controllers-56675bb4d5-dnrr5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-scheduler-576bc46687-gpg4k          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-vp5cf              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m59s  kube-proxy       
	  Normal   Starting                 6m6s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m6s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m6s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m6s   kubelet          Node addons-677874 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m6s   kubelet          Node addons-677874 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m6s   kubelet          Node addons-677874 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m1s   node-controller  Node addons-677874 event: Registered Node addons-677874 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [767b896fe6db4575fa3ae25404cfff2d928af8a86b45fb2d6fb0d5ab6a470035] <==
	{"level":"info","ts":"2024-08-18T18:39:01.860080Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-18T18:39:01.860385Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-18T18:39:01.863405Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-18T18:39:01.864727Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T18:39:01.867887Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T18:39:02.511854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-18T18:39:02.512061Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-18T18:39:02.512202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-18T18:39:02.512285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-18T18:39:02.512367Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-18T18:39:02.512450Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-18T18:39:02.512541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-18T18:39:02.515917Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T18:39:02.519961Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-677874 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T18:39:02.520136Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T18:39:02.520567Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T18:39:02.521561Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T18:39:02.522600Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-08-18T18:39:02.522883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T18:39:02.523018Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T18:39:02.523295Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T18:39:02.523525Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T18:39:02.523662Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T18:39:02.527888Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T18:39:02.528859Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [2630a04c64b548cfa3415afe1f06162bab7b0cbfc9620983cdbddf97e934cca5] <==
	2024/08/18 18:41:53 GCP Auth Webhook started!
	2024/08/18 18:42:11 Ready to marshal response ...
	2024/08/18 18:42:11 Ready to write response ...
	2024/08/18 18:42:12 Ready to marshal response ...
	2024/08/18 18:42:12 Ready to write response ...
	
	
	==> kernel <==
	 18:45:13 up 1 day,  3:27,  0 users,  load average: 0.09, 0.73, 1.08
	Linux addons-677874 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [11b73e1314419614f09e0d97f15c245466952162efa9507cb6b47e6abfd9be8a] <==
	E0818 18:43:55.589495       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0818 18:43:56.233334       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0818 18:43:56.233539       1 main.go:299] handling current node
	I0818 18:44:06.234265       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0818 18:44:06.234299       1 main.go:299] handling current node
	I0818 18:44:16.234015       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0818 18:44:16.234051       1 main.go:299] handling current node
	I0818 18:44:26.233392       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0818 18:44:26.233428       1 main.go:299] handling current node
	W0818 18:44:31.021072       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0818 18:44:31.021109       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0818 18:44:36.233858       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0818 18:44:36.233909       1 main.go:299] handling current node
	W0818 18:44:36.399726       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 18:44:36.399765       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0818 18:44:46.233933       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0818 18:44:46.233966       1 main.go:299] handling current node
	W0818 18:44:49.448859       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0818 18:44:49.448990       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0818 18:44:56.234202       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0818 18:44:56.234239       1 main.go:299] handling current node
	W0818 18:45:03.662152       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0818 18:45:03.662187       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0818 18:45:06.233574       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0818 18:45:06.233609       1 main.go:299] handling current node
	
	
	==> kube-apiserver [5aadb625effd3d18b7593c7fbaacc4184be814d96c776705696974eecc6f2dd8] <==
	W0818 18:40:27.163076       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:28.194550       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:29.014556       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.238.94:443: connect: connection refused
	E0818 18:40:29.014605       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.238.94:443: connect: connection refused" logger="UnhandledError"
	W0818 18:40:29.016372       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:29.048539       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.238.94:443: connect: connection refused
	E0818 18:40:29.048580       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.238.94:443: connect: connection refused" logger="UnhandledError"
	W0818 18:40:29.050144       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:29.231904       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:30.246246       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:31.340059       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:32.432971       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:33.528134       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:34.582737       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:35.642929       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:36.721916       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:37.749138       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.69.144:443: connect: connection refused
	W0818 18:40:47.957557       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.238.94:443: connect: connection refused
	E0818 18:40:47.957603       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.238.94:443: connect: connection refused" logger="UnhandledError"
	W0818 18:41:29.027476       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.238.94:443: connect: connection refused
	E0818 18:41:29.027517       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.238.94:443: connect: connection refused" logger="UnhandledError"
	W0818 18:41:29.060852       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.238.94:443: connect: connection refused
	E0818 18:41:29.060888       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.238.94:443: connect: connection refused" logger="UnhandledError"
	I0818 18:42:11.451330       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0818 18:42:11.521088       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [a3735d9f8018a96bb4dfea36ed865e346e3f07908a18f777035d54788c32cf24] <==
	I0818 18:41:29.050404       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:29.059846       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:29.079904       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:29.080109       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:29.095359       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:29.097151       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:29.109348       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:30.282714       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:30.297690       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:31.407701       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:31.431397       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:32.417960       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:32.428328       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:32.435231       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0818 18:41:32.439922       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:32.448725       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:32.455303       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0818 18:41:54.372687       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.651163ms"
	I0818 18:41:54.373068       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="49.338µs"
	I0818 18:42:02.026281       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0818 18:42:02.029736       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0818 18:42:02.074117       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0818 18:42:02.076092       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0818 18:42:11.167407       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I0818 18:42:11.513663       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-677874"
	
	
	==> kube-proxy [ef56e4808a6d1ae78d40df2c955f6a1b95271ac2e17fbcc84259d7a98c26d95c] <==
	I0818 18:39:14.163370       1 server_linux.go:66] "Using iptables proxy"
	I0818 18:39:14.298882       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0818 18:39:14.298957       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:39:14.331026       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0818 18:39:14.331094       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:39:14.334584       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:39:14.335060       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:39:14.335075       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:39:14.356071       1 config.go:326] "Starting node config controller"
	I0818 18:39:14.356098       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:39:14.357499       1 config.go:197] "Starting service config controller"
	I0818 18:39:14.357513       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:39:14.357531       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:39:14.357536       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:39:14.459963       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0818 18:39:14.460031       1 shared_informer.go:320] Caches are synced for node config
	I0818 18:39:14.460043       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [b5c356fb245baa225c535851515c4b1b0ed95b4cf69f1e91d362e33b2c4c6a27] <==
	W0818 18:39:05.749378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0818 18:39:05.749403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.749524       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 18:39:05.749550       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.749721       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 18:39:05.749744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.749885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 18:39:05.749987       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.750131       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0818 18:39:05.750221       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.750454       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 18:39:05.750558       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0818 18:39:05.750813       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 18:39:05.750925       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.751942       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 18:39:05.752093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.752207       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 18:39:05.752257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.752335       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 18:39:05.752370       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.752508       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 18:39:05.752636       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0818 18:39:05.753051       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 18:39:05.753079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0818 18:39:07.044456       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 18 18:43:09 addons-677874 kubelet[1469]: E0818 18:43:09.419876    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:43:21 addons-677874 kubelet[1469]: I0818 18:43:21.418835    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:43:21 addons-677874 kubelet[1469]: E0818 18:43:21.419040    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:43:22 addons-677874 kubelet[1469]: I0818 18:43:22.418950    1469 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-xmzmk" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:43:29 addons-677874 kubelet[1469]: I0818 18:43:29.418669    1469 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5z977" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:43:36 addons-677874 kubelet[1469]: I0818 18:43:36.419060    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:43:36 addons-677874 kubelet[1469]: E0818 18:43:36.419269    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:43:42 addons-677874 kubelet[1469]: I0818 18:43:42.418878    1469 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kdttv" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:43:50 addons-677874 kubelet[1469]: I0818 18:43:50.419332    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:43:50 addons-677874 kubelet[1469]: E0818 18:43:50.419584    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:44:02 addons-677874 kubelet[1469]: I0818 18:44:02.418525    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:44:02 addons-677874 kubelet[1469]: E0818 18:44:02.418745    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:44:17 addons-677874 kubelet[1469]: I0818 18:44:17.419952    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:44:17 addons-677874 kubelet[1469]: E0818 18:44:17.420608    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:44:28 addons-677874 kubelet[1469]: I0818 18:44:28.419110    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:44:28 addons-677874 kubelet[1469]: E0818 18:44:28.419338    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:44:35 addons-677874 kubelet[1469]: I0818 18:44:35.418592    1469 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5z977" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:44:38 addons-677874 kubelet[1469]: I0818 18:44:38.419180    1469 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-xmzmk" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:44:40 addons-677874 kubelet[1469]: I0818 18:44:40.419174    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:44:40 addons-677874 kubelet[1469]: E0818 18:44:40.419431    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:44:55 addons-677874 kubelet[1469]: I0818 18:44:55.419130    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:44:55 addons-677874 kubelet[1469]: E0818 18:44:55.419322    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	Aug 18 18:44:55 addons-677874 kubelet[1469]: I0818 18:44:55.419772    1469 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kdttv" secret="" err="secret \"gcp-auth\" not found"
	Aug 18 18:45:06 addons-677874 kubelet[1469]: I0818 18:45:06.419075    1469 scope.go:117] "RemoveContainer" containerID="74c9e849c39cdcc3db3c090750d83ed91c19173f8f97e6f7767fdb32d693264f"
	Aug 18 18:45:06 addons-677874 kubelet[1469]: E0818 18:45:06.419753    1469 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-zxrsl_gadget(744d378f-5523-4ff3-ab7c-e5307c9971ed)\"" pod="gadget/gadget-zxrsl" podUID="744d378f-5523-4ff3-ab7c-e5307c9971ed"
	
	
	==> storage-provisioner [9c21e07f378cf5eb7955b806ba570b51608edad4f372064f60482bc986e1f62e] <==
	I0818 18:39:19.812841       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 18:39:19.845016       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 18:39:19.845075       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 18:39:19.859710       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 18:39:19.859989       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-677874_7f5774e0-467c-4093-8869-4274a77d0bb3!
	I0818 18:39:19.860899       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46bc792c-5926-4f54-ac0f-021315e4a20d", APIVersion:"v1", ResourceVersion:"602", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-677874_7f5774e0-467c-4093-8869-4274a77d0bb3 became leader
	I0818 18:39:19.960742       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-677874_7f5774e0-467c-4093-8869-4274a77d0bb3!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-677874 -n addons-677874
helpers_test.go:261: (dbg) Run:  kubectl --context addons-677874 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-m6dhw ingress-nginx-admission-patch-w6zd4 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-677874 describe pod ingress-nginx-admission-create-m6dhw ingress-nginx-admission-patch-w6zd4 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-677874 describe pod ingress-nginx-admission-create-m6dhw ingress-nginx-admission-patch-w6zd4 test-job-nginx-0: exit status 1 (91.282557ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-m6dhw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-w6zd4" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-677874 describe pod ingress-nginx-admission-create-m6dhw ingress-nginx-admission-patch-w6zd4 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.96s)
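For local triage of this failure, the post-mortem queries above can be reproduced directly against the same profile. This is a minimal sketch (assuming the addons-677874 cluster is still running and the my-volcano namespace has not been cleaned up), not part of the harness output:

	# List pods not in the Running phase across all namespaces (same query as helpers_test.go:261)
	kubectl --context addons-677874 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'

	# Describe the stuck Volcano job pod and pull the events recorded for it
	kubectl --context addons-677874 -n my-volcano describe pod test-job-nginx-0
	kubectl --context addons-677874 -n my-volcano get events --field-selector involvedObject.name=test-job-nginx-0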

x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (376.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-216078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0818 19:29:10.146003  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-216078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m12.488716083s)

-- stdout --
	* [old-k8s-version-216078] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-216078" primary control-plane node in "old-k8s-version-216078" cluster
	* Pulling base image v0.0.44-1723740748-19452 ...
	* Restarting existing docker container for "old-k8s-version-216078" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-216078 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0818 19:28:16.256452  365394 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:28:16.256752  365394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:28:16.256784  365394 out.go:358] Setting ErrFile to fd 2...
	I0818 19:28:16.256805  365394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:28:16.257111  365394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 19:28:16.257553  365394 out.go:352] Setting JSON to false
	I0818 19:28:16.258602  365394 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":101441,"bootTime":1723907856,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0818 19:28:16.258698  365394 start.go:139] virtualization:  
	I0818 19:28:16.262725  365394 out.go:177] * [old-k8s-version-216078] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0818 19:28:16.264642  365394 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:28:16.264720  365394 notify.go:220] Checking for updates...
	I0818 19:28:16.271487  365394 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:28:16.273639  365394 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 19:28:16.275419  365394 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	I0818 19:28:16.276979  365394 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0818 19:28:16.278695  365394 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:28:16.281412  365394 config.go:182] Loaded profile config "old-k8s-version-216078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0818 19:28:16.283890  365394 out.go:177] * Kubernetes 1.31.0 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.0
	I0818 19:28:16.285782  365394 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:28:16.311971  365394 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0818 19:28:16.312095  365394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 19:28:16.417375  365394 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-18 19:28:16.407410424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 19:28:16.417483  365394 docker.go:307] overlay module found
	I0818 19:28:16.420028  365394 out.go:177] * Using the docker driver based on existing profile
	I0818 19:28:16.422252  365394 start.go:297] selected driver: docker
	I0818 19:28:16.422275  365394 start.go:901] validating driver "docker" against &{Name:old-k8s-version-216078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-216078 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:28:16.422377  365394 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:28:16.422967  365394 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 19:28:16.505161  365394 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:68 SystemTime:2024-08-18 19:28:16.491719167 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 19:28:16.505551  365394 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:28:16.505597  365394 cni.go:84] Creating CNI manager for ""
	I0818 19:28:16.505618  365394 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0818 19:28:16.505681  365394 start.go:340] cluster config:
	{Name:old-k8s-version-216078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-216078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:28:16.507715  365394 out.go:177] * Starting "old-k8s-version-216078" primary control-plane node in "old-k8s-version-216078" cluster
	I0818 19:28:16.509782  365394 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0818 19:28:16.511497  365394 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0818 19:28:16.513867  365394 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0818 19:28:16.513951  365394 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0818 19:28:16.513966  365394 cache.go:56] Caching tarball of preloaded images
	I0818 19:28:16.514053  365394 preload.go:172] Found /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 19:28:16.514067  365394 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0818 19:28:16.514179  365394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/config.json ...
	I0818 19:28:16.514406  365394 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	W0818 19:28:16.543381  365394 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0818 19:28:16.543406  365394 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0818 19:28:16.543486  365394 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0818 19:28:16.543509  365394 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0818 19:28:16.543518  365394 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0818 19:28:16.543525  365394 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0818 19:28:16.543532  365394 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0818 19:28:16.665787  365394 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0818 19:28:16.665825  365394 cache.go:194] Successfully downloaded all kic artifacts
	I0818 19:28:16.665862  365394 start.go:360] acquireMachinesLock for old-k8s-version-216078: {Name:mk9b9141f9b82034bdca9129b9fbed209f3e897c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:28:16.665931  365394 start.go:364] duration metric: took 42.068µs to acquireMachinesLock for "old-k8s-version-216078"
	I0818 19:28:16.665959  365394 start.go:96] Skipping create...Using existing machine configuration
	I0818 19:28:16.665967  365394 fix.go:54] fixHost starting: 
	I0818 19:28:16.666242  365394 cli_runner.go:164] Run: docker container inspect old-k8s-version-216078 --format={{.State.Status}}
	I0818 19:28:16.682233  365394 fix.go:112] recreateIfNeeded on old-k8s-version-216078: state=Stopped err=<nil>
	W0818 19:28:16.682265  365394 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 19:28:16.684429  365394 out.go:177] * Restarting existing docker container for "old-k8s-version-216078" ...
	I0818 19:28:16.686564  365394 cli_runner.go:164] Run: docker start old-k8s-version-216078
	I0818 19:28:17.048638  365394 cli_runner.go:164] Run: docker container inspect old-k8s-version-216078 --format={{.State.Status}}
	I0818 19:28:17.079010  365394 kic.go:430] container "old-k8s-version-216078" state is running.
	I0818 19:28:17.079419  365394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-216078
	I0818 19:28:17.103416  365394 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/config.json ...
	I0818 19:28:17.103646  365394 machine.go:93] provisionDockerMachine start ...
	I0818 19:28:17.103713  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:17.125430  365394 main.go:141] libmachine: Using SSH client type: native
	I0818 19:28:17.125717  365394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38614 <nil> <nil>}
	I0818 19:28:17.125734  365394 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 19:28:17.128320  365394 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0818 19:28:20.271872  365394 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-216078
	
	I0818 19:28:20.271895  365394 ubuntu.go:169] provisioning hostname "old-k8s-version-216078"
	I0818 19:28:20.271957  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:20.294135  365394 main.go:141] libmachine: Using SSH client type: native
	I0818 19:28:20.294394  365394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38614 <nil> <nil>}
	I0818 19:28:20.294409  365394 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-216078 && echo "old-k8s-version-216078" | sudo tee /etc/hostname
	I0818 19:28:20.448336  365394 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-216078
	
	I0818 19:28:20.448502  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:20.472596  365394 main.go:141] libmachine: Using SSH client type: native
	I0818 19:28:20.472862  365394 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38614 <nil> <nil>}
	I0818 19:28:20.472880  365394 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-216078' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-216078/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-216078' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:28:20.616194  365394 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:28:20.616271  365394 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-154159/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-154159/.minikube}
	I0818 19:28:20.616312  365394 ubuntu.go:177] setting up certificates
	I0818 19:28:20.616352  365394 provision.go:84] configureAuth start
	I0818 19:28:20.616457  365394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-216078
	I0818 19:28:20.638277  365394 provision.go:143] copyHostCerts
	I0818 19:28:20.638345  365394 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-154159/.minikube/ca.pem, removing ...
	I0818 19:28:20.638354  365394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-154159/.minikube/ca.pem
	I0818 19:28:20.638427  365394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/ca.pem (1082 bytes)
	I0818 19:28:20.638521  365394 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-154159/.minikube/cert.pem, removing ...
	I0818 19:28:20.638526  365394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-154159/.minikube/cert.pem
	I0818 19:28:20.638551  365394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/cert.pem (1123 bytes)
	I0818 19:28:20.638602  365394 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-154159/.minikube/key.pem, removing ...
	I0818 19:28:20.638607  365394 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-154159/.minikube/key.pem
	I0818 19:28:20.638628  365394 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/key.pem (1675 bytes)
	I0818 19:28:20.638674  365394 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-216078 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-216078]
	I0818 19:28:20.879095  365394 provision.go:177] copyRemoteCerts
	I0818 19:28:20.879211  365394 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:28:20.879295  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:20.896493  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:20.997670  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:28:21.029046  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0818 19:28:21.058340  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 19:28:21.090800  365394 provision.go:87] duration metric: took 474.416511ms to configureAuth
	I0818 19:28:21.090870  365394 ubuntu.go:193] setting minikube options for container-runtime
	I0818 19:28:21.091109  365394 config.go:182] Loaded profile config "old-k8s-version-216078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0818 19:28:21.091142  365394 machine.go:96] duration metric: took 3.987479614s to provisionDockerMachine
	I0818 19:28:21.091165  365394 start.go:293] postStartSetup for "old-k8s-version-216078" (driver="docker")
	I0818 19:28:21.091191  365394 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:28:21.091279  365394 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:28:21.091352  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:21.114992  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:21.217636  365394 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:28:21.221215  365394 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0818 19:28:21.221250  365394 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0818 19:28:21.221262  365394 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0818 19:28:21.221269  365394 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0818 19:28:21.221279  365394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-154159/.minikube/addons for local assets ...
	I0818 19:28:21.221332  365394 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-154159/.minikube/files for local assets ...
	I0818 19:28:21.221446  365394 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-154159/.minikube/files/etc/ssl/certs/1595492.pem -> 1595492.pem in /etc/ssl/certs
	I0818 19:28:21.221553  365394 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:28:21.230883  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/files/etc/ssl/certs/1595492.pem --> /etc/ssl/certs/1595492.pem (1708 bytes)
	I0818 19:28:21.256737  365394 start.go:296] duration metric: took 165.542979ms for postStartSetup
	I0818 19:28:21.256847  365394 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:28:21.256891  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:21.277738  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:21.374251  365394 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0818 19:28:21.382086  365394 fix.go:56] duration metric: took 4.716113097s for fixHost
	I0818 19:28:21.382113  365394 start.go:83] releasing machines lock for "old-k8s-version-216078", held for 4.716167907s
	I0818 19:28:21.382179  365394 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-216078
	I0818 19:28:21.413408  365394 ssh_runner.go:195] Run: cat /version.json
	I0818 19:28:21.413456  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:21.413700  365394 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:28:21.413755  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:21.442451  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:21.502583  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:21.730947  365394 ssh_runner.go:195] Run: systemctl --version
	I0818 19:28:21.735285  365394 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 19:28:21.741992  365394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0818 19:28:21.761547  365394 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0818 19:28:21.761628  365394 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:28:21.774620  365394 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0818 19:28:21.774649  365394 start.go:495] detecting cgroup driver to use...
	I0818 19:28:21.774683  365394 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0818 19:28:21.774752  365394 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 19:28:21.788100  365394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 19:28:21.803288  365394 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:28:21.803380  365394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:28:21.815678  365394 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:28:21.828921  365394 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:28:21.918517  365394 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:28:22.013352  365394 docker.go:233] disabling docker service ...
	I0818 19:28:22.013445  365394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:28:22.028497  365394 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:28:22.042002  365394 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0818 19:28:22.143545  365394 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0818 19:28:22.234863  365394 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0818 19:28:22.246557  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 19:28:22.265330  365394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0818 19:28:22.277148  365394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 19:28:22.286735  365394 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 19:28:22.286804  365394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 19:28:22.296294  365394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 19:28:22.308440  365394 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 19:28:22.318814  365394 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 19:28:22.329651  365394 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 19:28:22.338792  365394 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 19:28:22.348605  365394 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 19:28:22.357765  365394 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 19:28:22.369139  365394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:28:22.460411  365394 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 19:28:22.625527  365394 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0818 19:28:22.625616  365394 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0818 19:28:22.634054  365394 start.go:563] Will wait 60s for crictl version
	I0818 19:28:22.634120  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:28:22.637913  365394 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 19:28:22.677279  365394 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0818 19:28:22.677348  365394 ssh_runner.go:195] Run: containerd --version
	I0818 19:28:22.700128  365394 ssh_runner.go:195] Run: containerd --version
	I0818 19:28:22.729067  365394 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.20 ...
	I0818 19:28:22.731345  365394 cli_runner.go:164] Run: docker network inspect old-k8s-version-216078 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0818 19:28:22.750298  365394 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0818 19:28:22.753931  365394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 19:28:22.764515  365394 kubeadm.go:883] updating cluster {Name:old-k8s-version-216078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-216078 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 19:28:22.764634  365394 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0818 19:28:22.764689  365394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:28:22.811544  365394 containerd.go:627] all images are preloaded for containerd runtime.
	I0818 19:28:22.811572  365394 containerd.go:534] Images already preloaded, skipping extraction
	I0818 19:28:22.811668  365394 ssh_runner.go:195] Run: sudo crictl images --output json
	I0818 19:28:22.849076  365394 containerd.go:627] all images are preloaded for containerd runtime.
	I0818 19:28:22.849143  365394 cache_images.go:84] Images are preloaded, skipping loading
	I0818 19:28:22.849159  365394 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0818 19:28:22.849275  365394 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-216078 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-216078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 19:28:22.849352  365394 ssh_runner.go:195] Run: sudo crictl info
	I0818 19:28:22.902734  365394 cni.go:84] Creating CNI manager for ""
	I0818 19:28:22.902828  365394 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0818 19:28:22.902862  365394 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 19:28:22.902891  365394 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-216078 NodeName:old-k8s-version-216078 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0818 19:28:22.903062  365394 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-216078"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
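
	The generated kubeadm.yaml above is a multi-document file: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by "---"; the log below shows it being copied to /var/tmp/minikube/kubeadm.yaml.new on the node. Purely as an illustration (not part of the test run), a minimal Go sketch that splits such a file and reports each document's kind could look like this; the local file name "kubeadm.yaml" is an assumption for the example.
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		// Assumed local copy of the generated config; on the node it lives at
		// /var/tmp/minikube/kubeadm.yaml.new as shown in the log below.
		data, err := os.ReadFile("kubeadm.yaml")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read:", err)
			os.Exit(1)
		}
		// Multi-document YAML: documents are separated by "---" lines.
		for i, doc := range strings.Split(string(data), "\n---\n") {
			for _, line := range strings.Split(doc, "\n") {
				if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
					fmt.Printf("document %d: %s\n", i, strings.TrimSpace(line))
				}
			}
		}
	}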
	
	I0818 19:28:22.903138  365394 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0818 19:28:22.916456  365394 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 19:28:22.916564  365394 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 19:28:22.927349  365394 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0818 19:28:22.946316  365394 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 19:28:22.965262  365394 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0818 19:28:22.991547  365394 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0818 19:28:22.995171  365394 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0818 19:28:23.009263  365394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:28:23.109672  365394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:28:23.129467  365394 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078 for IP: 192.168.76.2
	I0818 19:28:23.129528  365394 certs.go:194] generating shared ca certs ...
	I0818 19:28:23.129560  365394 certs.go:226] acquiring lock for ca certs: {Name:mk31d70c02908ccfff2b137754f7d1e0b3715b1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:28:23.129723  365394 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-154159/.minikube/ca.key
	I0818 19:28:23.129806  365394 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.key
	I0818 19:28:23.129830  365394 certs.go:256] generating profile certs ...
	I0818 19:28:23.129931  365394 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.key
	I0818 19:28:23.130022  365394 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/apiserver.key.95d38173
	I0818 19:28:23.130087  365394 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/proxy-client.key
	I0818 19:28:23.130233  365394 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/159549.pem (1338 bytes)
	W0818 19:28:23.130297  365394 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-154159/.minikube/certs/159549_empty.pem, impossibly tiny 0 bytes
	I0818 19:28:23.130322  365394 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca-key.pem (1679 bytes)
	I0818 19:28:23.130369  365394 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem (1082 bytes)
	I0818 19:28:23.130434  365394 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem (1123 bytes)
	I0818 19:28:23.130483  365394 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/key.pem (1675 bytes)
	I0818 19:28:23.130548  365394 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-154159/.minikube/files/etc/ssl/certs/1595492.pem (1708 bytes)
	I0818 19:28:23.131224  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 19:28:23.164954  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0818 19:28:23.197793  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 19:28:23.228481  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 19:28:23.258505  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0818 19:28:23.295591  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 19:28:23.332470  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 19:28:23.362454  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0818 19:28:23.390365  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/files/etc/ssl/certs/1595492.pem --> /usr/share/ca-certificates/1595492.pem (1708 bytes)
	I0818 19:28:23.417113  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 19:28:23.445393  365394 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/certs/159549.pem --> /usr/share/ca-certificates/159549.pem (1338 bytes)
	I0818 19:28:23.471539  365394 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 19:28:23.491253  365394 ssh_runner.go:195] Run: openssl version
	I0818 19:28:23.497213  365394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1595492.pem && ln -fs /usr/share/ca-certificates/1595492.pem /etc/ssl/certs/1595492.pem"
	I0818 19:28:23.506845  365394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1595492.pem
	I0818 19:28:23.510482  365394 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:48 /usr/share/ca-certificates/1595492.pem
	I0818 19:28:23.510560  365394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1595492.pem
	I0818 19:28:23.517422  365394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1595492.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 19:28:23.527435  365394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 19:28:23.537580  365394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:28:23.543910  365394 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:28:23.543973  365394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 19:28:23.551271  365394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 19:28:23.560251  365394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/159549.pem && ln -fs /usr/share/ca-certificates/159549.pem /etc/ssl/certs/159549.pem"
	I0818 19:28:23.569623  365394 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/159549.pem
	I0818 19:28:23.573375  365394 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:48 /usr/share/ca-certificates/159549.pem
	I0818 19:28:23.573474  365394 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/159549.pem
	I0818 19:28:23.583519  365394 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/159549.pem /etc/ssl/certs/51391683.0"
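
	Each CA bundle above is copied into /usr/share/ca-certificates and then linked into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. 3ec20f2e.0, b5213941.0), which is how TLS clients on the node locate it. A rough Go sketch of the same idea, shelling out to openssl for the hash exactly as the log does; the paths are taken from the log and the program is illustrative only, not minikube's code.
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Path taken from the log; any PEM-encoded CA certificate works.
		pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	
		// `openssl x509 -hash -noout -in <cert>` prints the subject hash used
		// as the /etc/ssl/certs/<hash>.0 symlink name.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "openssl:", err)
			os.Exit(1)
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	
		// Replace any existing link, mirroring `ln -fs` in the log.
		_ = os.Remove(link)
		if err := os.Symlink(pemPath, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink:", err)
			os.Exit(1)
		}
		fmt.Println("linked", pemPath, "->", link)
	}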
	I0818 19:28:23.592707  365394 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 19:28:23.598404  365394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 19:28:23.605965  365394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 19:28:23.613580  365394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 19:28:23.620292  365394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 19:28:23.626795  365394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 19:28:23.633433  365394 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
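
	The `openssl x509 -checkend 86400` runs above simply verify that each control-plane certificate stays valid for at least another 24 hours before the restart proceeds. A minimal Go equivalent, assuming the apiserver.crt path shown in the log, might be:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		// Assumed path for illustration; the log checks several certs under
		// /var/lib/minikube/certs with `openssl x509 -checkend 86400`.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, "read:", err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, "parse:", err)
			os.Exit(1)
		}
		// Equivalent of -checkend 86400: fail if the cert expires within 24h.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}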
	I0818 19:28:23.640195  365394 kubeadm.go:392] StartCluster: {Name:old-k8s-version-216078 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-216078 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:28:23.640304  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0818 19:28:23.640378  365394 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0818 19:28:23.683873  365394 cri.go:89] found id: "f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894"
	I0818 19:28:23.683898  365394 cri.go:89] found id: "80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c"
	I0818 19:28:23.683904  365394 cri.go:89] found id: "f4df0f963919a929cc762d019dd1d149a386596ccd181272d8750d152ffd40c0"
	I0818 19:28:23.683909  365394 cri.go:89] found id: "905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1"
	I0818 19:28:23.683912  365394 cri.go:89] found id: "c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10"
	I0818 19:28:23.683916  365394 cri.go:89] found id: "dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1"
	I0818 19:28:23.683919  365394 cri.go:89] found id: "00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad"
	I0818 19:28:23.683922  365394 cri.go:89] found id: "a80fdd47f7301370c07df42778e346c5b3f9a5082205cddd92243b2292907316"
	I0818 19:28:23.683931  365394 cri.go:89] found id: ""
	I0818 19:28:23.683984  365394 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0818 19:28:23.696174  365394 cri.go:116] JSON = null
	W0818 19:28:23.696249  365394 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0818 19:28:23.696332  365394 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 19:28:23.707868  365394 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 19:28:23.707889  365394 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 19:28:23.707940  365394 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 19:28:23.716465  365394 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:28:23.716889  365394 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-216078" does not appear in /home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 19:28:23.717005  365394 kubeconfig.go:62] /home/jenkins/minikube-integration/19423-154159/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-216078" cluster setting kubeconfig missing "old-k8s-version-216078" context setting]
	I0818 19:28:23.717286  365394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/kubeconfig: {Name:mk1cf742f712a0c2eee94d91acebc845c12c0cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:28:23.718504  365394 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 19:28:23.730379  365394 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0818 19:28:23.730414  365394 kubeadm.go:597] duration metric: took 22.517902ms to restartPrimaryControlPlane
	I0818 19:28:23.730425  365394 kubeadm.go:394] duration metric: took 90.240995ms to StartCluster
	I0818 19:28:23.730440  365394 settings.go:142] acquiring lock: {Name:mk0e4cdfcdf22fb3f19678cc9275d3e9545c0e60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:28:23.730516  365394 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 19:28:23.731136  365394 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/kubeconfig: {Name:mk1cf742f712a0c2eee94d91acebc845c12c0cf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 19:28:23.731371  365394 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0818 19:28:23.731601  365394 config.go:182] Loaded profile config "old-k8s-version-216078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0818 19:28:23.731641  365394 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 19:28:23.731712  365394 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-216078"
	I0818 19:28:23.731734  365394 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-216078"
	W0818 19:28:23.731740  365394 addons.go:243] addon storage-provisioner should already be in state true
	I0818 19:28:23.731766  365394 host.go:66] Checking if "old-k8s-version-216078" exists ...
	I0818 19:28:23.732244  365394 cli_runner.go:164] Run: docker container inspect old-k8s-version-216078 --format={{.State.Status}}
	I0818 19:28:23.732657  365394 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-216078"
	I0818 19:28:23.732691  365394 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-216078"
	W0818 19:28:23.732701  365394 addons.go:243] addon metrics-server should already be in state true
	I0818 19:28:23.732732  365394 host.go:66] Checking if "old-k8s-version-216078" exists ...
	I0818 19:28:23.733137  365394 cli_runner.go:164] Run: docker container inspect old-k8s-version-216078 --format={{.State.Status}}
	I0818 19:28:23.734557  365394 out.go:177] * Verifying Kubernetes components...
	I0818 19:28:23.734879  365394 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-216078"
	I0818 19:28:23.734945  365394 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-216078"
	I0818 19:28:23.736432  365394 addons.go:69] Setting dashboard=true in profile "old-k8s-version-216078"
	I0818 19:28:23.736475  365394 addons.go:234] Setting addon dashboard=true in "old-k8s-version-216078"
	W0818 19:28:23.736507  365394 addons.go:243] addon dashboard should already be in state true
	I0818 19:28:23.736539  365394 host.go:66] Checking if "old-k8s-version-216078" exists ...
	I0818 19:28:23.737024  365394 cli_runner.go:164] Run: docker container inspect old-k8s-version-216078 --format={{.State.Status}}
	I0818 19:28:23.737879  365394 cli_runner.go:164] Run: docker container inspect old-k8s-version-216078 --format={{.State.Status}}
	I0818 19:28:23.742810  365394 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 19:28:23.783913  365394 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 19:28:23.785986  365394 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 19:28:23.786008  365394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 19:28:23.786071  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:23.789529  365394 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0818 19:28:23.794694  365394 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0818 19:28:23.794729  365394 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0818 19:28:23.794803  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:23.824747  365394 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0818 19:28:23.835692  365394 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-216078"
	W0818 19:28:23.835716  365394 addons.go:243] addon default-storageclass should already be in state true
	I0818 19:28:23.835740  365394 host.go:66] Checking if "old-k8s-version-216078" exists ...
	I0818 19:28:23.836197  365394 cli_runner.go:164] Run: docker container inspect old-k8s-version-216078 --format={{.State.Status}}
	I0818 19:28:23.847859  365394 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0818 19:28:23.850044  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0818 19:28:23.850065  365394 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0818 19:28:23.850137  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:23.865688  365394 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 19:28:23.865713  365394 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 19:28:23.865779  365394 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-216078
	I0818 19:28:23.878851  365394 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 19:28:23.880739  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:23.891969  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:23.907221  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:23.915151  365394 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38614 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/old-k8s-version-216078/id_rsa Username:docker}
	I0818 19:28:23.955877  365394 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-216078" to be "Ready" ...
	I0818 19:28:24.029281  365394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0818 19:28:24.029350  365394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0818 19:28:24.034544  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 19:28:24.057785  365394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0818 19:28:24.057830  365394 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0818 19:28:24.088590  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0818 19:28:24.088617  365394 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0818 19:28:24.099030  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0818 19:28:24.132157  365394 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 19:28:24.132184  365394 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0818 19:28:24.139408  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0818 19:28:24.139444  365394 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0818 19:28:24.181395  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0818 19:28:24.191750  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.191811  365394 retry.go:31] will retry after 274.098508ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.248721  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0818 19:28:24.248762  365394 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0818 19:28:24.281154  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.281188  365394 retry.go:31] will retry after 186.5229ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.296264  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0818 19:28:24.296291  365394 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0818 19:28:24.315729  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.315762  365394 retry.go:31] will retry after 353.745978ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.319541  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0818 19:28:24.319568  365394 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0818 19:28:24.337999  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0818 19:28:24.338068  365394 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0818 19:28:24.356280  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0818 19:28:24.356305  365394 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0818 19:28:24.374748  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0818 19:28:24.374810  365394 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0818 19:28:24.393182  365394 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0818 19:28:24.393205  365394 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0818 19:28:24.412348  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0818 19:28:24.467086  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 19:28:24.468205  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0818 19:28:24.513690  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.513721  365394 retry.go:31] will retry after 297.303431ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:24.599678  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.599716  365394 retry.go:31] will retry after 418.670962ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:24.599750  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.599760  365394 retry.go:31] will retry after 519.310011ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.669995  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0818 19:28:24.750350  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.750386  365394 retry.go:31] will retry after 433.957523ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.811778  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0818 19:28:24.920226  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:24.920259  365394 retry.go:31] will retry after 546.190672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.018961  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 19:28:25.119810  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0818 19:28:25.184780  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0818 19:28:25.224220  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.224259  365394 retry.go:31] will retry after 301.18108ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:25.306139  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.306171  365394 retry.go:31] will retry after 798.351817ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:25.318066  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.318099  365394 retry.go:31] will retry after 317.642649ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.467003  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0818 19:28:25.525932  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0818 19:28:25.579288  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.579324  365394 retry.go:31] will retry after 558.227574ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.636607  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0818 19:28:25.722996  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.723032  365394 retry.go:31] will retry after 926.967076ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:25.907777  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.907825  365394 retry.go:31] will retry after 1.132009601s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:25.956474  365394 node_ready.go:53] error getting node "old-k8s-version-216078": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-216078": dial tcp 192.168.76.2:8443: connect: connection refused
	I0818 19:28:26.104667  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0818 19:28:26.137911  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0818 19:28:26.252855  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:26.252909  365394 retry.go:31] will retry after 1.039958798s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:26.328205  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:26.328239  365394 retry.go:31] will retry after 513.95598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:26.650171  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0818 19:28:26.738548  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:26.738580  365394 retry.go:31] will retry after 683.511251ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:26.842820  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0818 19:28:26.942718  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:26.942747  365394 retry.go:31] will retry after 927.13556ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:27.040809  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0818 19:28:27.192715  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:27.192745  365394 retry.go:31] will retry after 1.583782834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:27.294040  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0818 19:28:27.386415  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:27.386444  365394 retry.go:31] will retry after 1.316723762s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:27.422760  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0818 19:28:27.523898  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:27.524006  365394 retry.go:31] will retry after 1.225213217s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:27.870480  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0818 19:28:27.957147  365394 node_ready.go:53] error getting node "old-k8s-version-216078": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-216078": dial tcp 192.168.76.2:8443: connect: connection refused
	W0818 19:28:27.961689  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:27.961720  365394 retry.go:31] will retry after 1.738176148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:28.703840  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0818 19:28:28.750117  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 19:28:28.777388  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0818 19:28:28.812146  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:28.812175  365394 retry.go:31] will retry after 1.23296768s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:28.986506  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:28.986535  365394 retry.go:31] will retry after 3.879002696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:28.986573  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:28.986580  365394 retry.go:31] will retry after 1.276576491s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:29.700811  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0818 19:28:29.795254  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:29.795285  365394 retry.go:31] will retry after 2.205868674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:30.048225  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0818 19:28:30.221329  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:30.221361  365394 retry.go:31] will retry after 2.872198088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:30.263718  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0818 19:28:30.363282  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:30.363314  365394 retry.go:31] will retry after 2.178284097s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:30.456920  365394 node_ready.go:53] error getting node "old-k8s-version-216078": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-216078": dial tcp 192.168.76.2:8443: connect: connection refused
	I0818 19:28:32.001289  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0818 19:28:32.104342  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:32.104369  365394 retry.go:31] will retry after 4.324393365s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:32.457278  365394 node_ready.go:53] error getting node "old-k8s-version-216078": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-216078": dial tcp 192.168.76.2:8443: connect: connection refused
	I0818 19:28:32.542586  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0818 19:28:32.763640  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:32.763670  365394 retry.go:31] will retry after 5.550464785s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:32.865887  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 19:28:33.093972  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0818 19:28:33.288869  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:33.288909  365394 retry.go:31] will retry after 3.621098907s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0818 19:28:33.495930  365394 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:33.495959  365394 retry.go:31] will retry after 3.167914032s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0818 19:28:36.428984  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0818 19:28:36.664749  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0818 19:28:36.910194  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 19:28:38.315108  365394 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0818 19:28:43.957591  365394 node_ready.go:53] error getting node "old-k8s-version-216078": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-216078": net/http: TLS handshake timeout
	I0818 19:28:45.997266  365394 node_ready.go:49] node "old-k8s-version-216078" has status "Ready":"True"
	I0818 19:28:45.997289  365394 node_ready.go:38] duration metric: took 22.04137427s for node "old-k8s-version-216078" to be "Ready" ...
	I0818 19:28:45.997299  365394 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 19:28:46.354164  365394 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-vvtgz" in "kube-system" namespace to be "Ready" ...
	I0818 19:28:46.625285  365394 pod_ready.go:93] pod "coredns-74ff55c5b-vvtgz" in "kube-system" namespace has status "Ready":"True"
	I0818 19:28:46.625357  365394 pod_ready.go:82] duration metric: took 271.099181ms for pod "coredns-74ff55c5b-vvtgz" in "kube-system" namespace to be "Ready" ...
	I0818 19:28:46.625382  365394 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-216078" in "kube-system" namespace to be "Ready" ...
	I0818 19:28:48.638291  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:28:48.945463  365394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (12.2806759s)
	I0818 19:28:48.945686  365394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (12.035468934s)
	I0818 19:28:48.945760  365394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.630628821s)
	I0818 19:28:48.945775  365394 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-216078"
	I0818 19:28:48.945823  365394 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.5168061s)
	I0818 19:28:48.949318  365394 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-216078 addons enable metrics-server
	
	I0818 19:28:48.954812  365394 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I0818 19:28:48.956780  365394 addons.go:510] duration metric: took 25.225124735s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
	I0818 19:28:51.134053  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:28:53.631500  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:28:55.634241  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:28:58.132967  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:00.151360  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:02.631656  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:04.693139  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:07.132804  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:09.632512  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:12.142696  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:14.631245  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:16.632179  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:18.632379  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:21.133658  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:23.134949  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:25.631396  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:28.131696  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:30.132127  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:32.132678  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:34.132867  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:36.180870  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:38.632159  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:40.632379  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:42.633402  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:45.133471  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:47.135990  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:49.633217  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:51.640856  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:54.133075  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:56.632009  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:29:58.632349  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:00.633785  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:03.132478  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:05.632865  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:08.132900  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:10.631985  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:13.131586  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:15.133127  365394 pod_ready.go:103] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:16.131560  365394 pod_ready.go:93] pod "etcd-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"True"
	I0818 19:30:16.131587  365394 pod_ready.go:82] duration metric: took 1m29.506182017s for pod "etcd-old-k8s-version-216078" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.131603  365394 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-216078" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.137187  365394 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"True"
	I0818 19:30:16.137215  365394 pod_ready.go:82] duration metric: took 5.603096ms for pod "kube-apiserver-old-k8s-version-216078" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.137227  365394 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-216078" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.142670  365394 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"True"
	I0818 19:30:16.142699  365394 pod_ready.go:82] duration metric: took 5.464791ms for pod "kube-controller-manager-old-k8s-version-216078" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.142711  365394 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-n9glb" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.148075  365394 pod_ready.go:93] pod "kube-proxy-n9glb" in "kube-system" namespace has status "Ready":"True"
	I0818 19:30:16.148097  365394 pod_ready.go:82] duration metric: took 5.3784ms for pod "kube-proxy-n9glb" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.148108  365394 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-216078" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.153170  365394 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-216078" in "kube-system" namespace has status "Ready":"True"
	I0818 19:30:16.153193  365394 pod_ready.go:82] duration metric: took 5.077404ms for pod "kube-scheduler-old-k8s-version-216078" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:16.153205  365394 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace to be "Ready" ...
	I0818 19:30:18.159444  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:20.658805  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:22.659119  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:24.659361  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:26.659633  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:29.160376  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:31.160428  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:33.658986  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:35.660424  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:38.159303  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:40.160672  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:42.165873  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:44.659138  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:46.719028  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:49.159300  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:51.159552  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:53.160508  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:55.160797  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:57.659679  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:30:59.660030  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:01.667587  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:04.160120  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:06.661038  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:09.159406  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:11.159873  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:13.160968  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:15.659657  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:17.659766  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:20.160774  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:22.161059  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:24.659329  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:27.160302  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:29.661610  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:32.160382  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:34.658853  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:36.660658  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:39.159309  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:41.160598  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:43.658732  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:45.659263  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:47.659838  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:50.159885  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:52.160234  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:54.659861  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:56.660403  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:31:59.159326  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:01.159471  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:03.160341  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:05.660515  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:08.160071  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:10.161151  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:12.660275  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:15.159961  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:17.659024  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:20.159439  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:22.160038  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:24.160156  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:26.659676  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:29.159547  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:31.659019  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:34.161582  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:36.659504  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:39.159972  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:41.660264  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:44.159742  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:46.659441  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:48.659757  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:51.160016  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:53.658678  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:55.659947  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:32:58.159336  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:00.178292  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:02.660110  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:05.160170  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:07.168917  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:09.660628  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:12.159904  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:14.659581  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:17.166067  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:19.666334  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:22.159726  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:24.160172  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:26.659028  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:28.659584  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:31.159720  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:33.167274  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:35.659876  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:38.161316  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:40.659653  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:43.159333  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:45.162436  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:47.660004  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:50.165408  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:52.660266  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:55.160748  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:57.659541  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:33:59.660186  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:34:02.159314  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:34:04.160513  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:34:06.659607  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:34:08.659655  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:34:11.159587  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:34:13.168138  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:34:15.178986  365394 pod_ready.go:103] pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace has status "Ready":"False"
	I0818 19:34:16.161288  365394 pod_ready.go:82] duration metric: took 4m0.008068993s for pod "metrics-server-9975d5f86-5k9lw" in "kube-system" namespace to be "Ready" ...
	E0818 19:34:16.161316  365394 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0818 19:34:16.161326  365394 pod_ready.go:39] duration metric: took 5m30.164015814s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 19:34:16.161346  365394 api_server.go:52] waiting for apiserver process to appear ...
	I0818 19:34:16.161377  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0818 19:34:16.161438  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0818 19:34:16.243626  365394 cri.go:89] found id: "a70df47e0a4c904f4234489c4b6931a0c40e23791ebbe701bdf79b173c708054"
	I0818 19:34:16.243651  365394 cri.go:89] found id: "a80fdd47f7301370c07df42778e346c5b3f9a5082205cddd92243b2292907316"
	I0818 19:34:16.243668  365394 cri.go:89] found id: ""
	I0818 19:34:16.243676  365394 logs.go:276] 2 containers: [a70df47e0a4c904f4234489c4b6931a0c40e23791ebbe701bdf79b173c708054 a80fdd47f7301370c07df42778e346c5b3f9a5082205cddd92243b2292907316]
	I0818 19:34:16.243729  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.250808  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.255664  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0818 19:34:16.255739  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0818 19:34:16.332699  365394 cri.go:89] found id: "03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178"
	I0818 19:34:16.332722  365394 cri.go:89] found id: "dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1"
	I0818 19:34:16.332727  365394 cri.go:89] found id: ""
	I0818 19:34:16.332749  365394 logs.go:276] 2 containers: [03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178 dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1]
	I0818 19:34:16.332805  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.338994  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.347143  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0818 19:34:16.347323  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 19:34:16.414776  365394 cri.go:89] found id: "9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195"
	I0818 19:34:16.414802  365394 cri.go:89] found id: "f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894"
	I0818 19:34:16.414812  365394 cri.go:89] found id: ""
	I0818 19:34:16.414823  365394 logs.go:276] 2 containers: [9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195 f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894]
	I0818 19:34:16.414925  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.422222  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.428687  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0818 19:34:16.428787  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 19:34:16.509373  365394 cri.go:89] found id: "3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4"
	I0818 19:34:16.509418  365394 cri.go:89] found id: "c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10"
	I0818 19:34:16.509429  365394 cri.go:89] found id: ""
	I0818 19:34:16.509438  365394 logs.go:276] 2 containers: [3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4 c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10]
	I0818 19:34:16.509519  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.515307  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.519719  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0818 19:34:16.519907  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 19:34:16.598251  365394 cri.go:89] found id: "d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f"
	I0818 19:34:16.598273  365394 cri.go:89] found id: "905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1"
	I0818 19:34:16.598278  365394 cri.go:89] found id: ""
	I0818 19:34:16.598289  365394 logs.go:276] 2 containers: [d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f 905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1]
	I0818 19:34:16.598352  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.605307  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.611188  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 19:34:16.611266  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 19:34:16.674927  365394 cri.go:89] found id: "6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca"
	I0818 19:34:16.674951  365394 cri.go:89] found id: "00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad"
	I0818 19:34:16.674957  365394 cri.go:89] found id: ""
	I0818 19:34:16.674965  365394 logs.go:276] 2 containers: [6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca 00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad]
	I0818 19:34:16.675024  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.679400  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.682802  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0818 19:34:16.682868  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 19:34:16.728600  365394 cri.go:89] found id: "136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1"
	I0818 19:34:16.728621  365394 cri.go:89] found id: "80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c"
	I0818 19:34:16.728626  365394 cri.go:89] found id: ""
	I0818 19:34:16.728633  365394 logs.go:276] 2 containers: [136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1 80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c]
	I0818 19:34:16.728692  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.732280  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.735563  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0818 19:34:16.735678  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 19:34:16.776231  365394 cri.go:89] found id: "35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109"
	I0818 19:34:16.776254  365394 cri.go:89] found id: "48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa"
	I0818 19:34:16.776259  365394 cri.go:89] found id: ""
	I0818 19:34:16.776267  365394 logs.go:276] 2 containers: [35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109 48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa]
	I0818 19:34:16.776324  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.779901  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.783067  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 19:34:16.783149  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 19:34:16.825413  365394 cri.go:89] found id: "aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394"
	I0818 19:34:16.825435  365394 cri.go:89] found id: ""
	I0818 19:34:16.825444  365394 logs.go:276] 1 containers: [aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394]
	I0818 19:34:16.825499  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.829000  365394 logs.go:123] Gathering logs for dmesg ...
	I0818 19:34:16.829067  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 19:34:16.845161  365394 logs.go:123] Gathering logs for etcd [03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178] ...
	I0818 19:34:16.845193  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178"
	I0818 19:34:16.899783  365394 logs.go:123] Gathering logs for kube-scheduler [3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4] ...
	I0818 19:34:16.899919  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4"
	I0818 19:34:16.941128  365394 logs.go:123] Gathering logs for kube-controller-manager [00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad] ...
	I0818 19:34:16.941168  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad"
	I0818 19:34:17.016362  365394 logs.go:123] Gathering logs for storage-provisioner [35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109] ...
	I0818 19:34:17.016400  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109"
	I0818 19:34:17.071278  365394 logs.go:123] Gathering logs for kubernetes-dashboard [aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394] ...
	I0818 19:34:17.071318  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394"
	I0818 19:34:17.148503  365394 logs.go:123] Gathering logs for kube-proxy [d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f] ...
	I0818 19:34:17.148774  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f"
	I0818 19:34:17.219704  365394 logs.go:123] Gathering logs for kube-proxy [905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1] ...
	I0818 19:34:17.219728  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1"
	I0818 19:34:17.276392  365394 logs.go:123] Gathering logs for kube-controller-manager [6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca] ...
	I0818 19:34:17.276421  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca"
	I0818 19:34:17.403147  365394 logs.go:123] Gathering logs for storage-provisioner [48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa] ...
	I0818 19:34:17.403183  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa"
	I0818 19:34:17.450187  365394 logs.go:123] Gathering logs for containerd ...
	I0818 19:34:17.450216  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0818 19:34:17.528357  365394 logs.go:123] Gathering logs for container status ...
	I0818 19:34:17.528397  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 19:34:17.619646  365394 logs.go:123] Gathering logs for kubelet ...
	I0818 19:34:17.619677  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 19:34:17.682312  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.116436     659 reflector.go:138] object-"kube-system"/"kube-proxy-token-gkhsq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gkhsq" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.682589  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.117472     659 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.682950  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.130723     659 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jg9hx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jg9hx" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.683233  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.130814     659 reflector.go:138] object-"default"/"default-token-79mlg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-79mlg" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.683613  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.137493     659 reflector.go:138] object-"kube-system"/"metrics-server-token-lkhr7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-lkhr7" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.683926  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.137601     659 reflector.go:138] object-"kube-system"/"kindnet-token-pj6jq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pj6jq" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.684195  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.137667     659 reflector.go:138] object-"kube-system"/"coredns-token-tstz9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tstz9" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.684437  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.137736     659 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.694259  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:49 old-k8s-version-216078 kubelet[659]: E0818 19:28:49.150627     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.694481  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:49 old-k8s-version-216078 kubelet[659]: E0818 19:28:49.197339     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.698078  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:59 old-k8s-version-216078 kubelet[659]: E0818 19:28:59.777587     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.700642  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:11 old-k8s-version-216078 kubelet[659]: E0818 19:29:11.292167     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.701410  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:12 old-k8s-version-216078 kubelet[659]: E0818 19:29:12.336846     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.701625  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:14 old-k8s-version-216078 kubelet[659]: E0818 19:29:14.740520     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.702008  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:17 old-k8s-version-216078 kubelet[659]: E0818 19:29:17.681494     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.702525  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:20 old-k8s-version-216078 kubelet[659]: E0818 19:29:20.366319     659 pod_workers.go:191] Error syncing pod b224ab7a-edb7-4ab4-9f31-ddafa7172d46 ("storage-provisioner_kube-system(b224ab7a-edb7-4ab4-9f31-ddafa7172d46)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b224ab7a-edb7-4ab4-9f31-ddafa7172d46)"
	W0818 19:34:17.705773  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:27 old-k8s-version-216078 kubelet[659]: E0818 19:29:27.748257     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.706438  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:30 old-k8s-version-216078 kubelet[659]: E0818 19:29:30.417852     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.706945  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:37 old-k8s-version-216078 kubelet[659]: E0818 19:29:37.681274     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.707162  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:42 old-k8s-version-216078 kubelet[659]: E0818 19:29:42.772972     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.707534  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:49 old-k8s-version-216078 kubelet[659]: E0818 19:29:49.740124     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.707746  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:56 old-k8s-version-216078 kubelet[659]: E0818 19:29:56.740480     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.708431  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:01 old-k8s-version-216078 kubelet[659]: E0818 19:30:01.513179     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.708780  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:07 old-k8s-version-216078 kubelet[659]: E0818 19:30:07.681775     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.711748  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:10 old-k8s-version-216078 kubelet[659]: E0818 19:30:10.748586     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.712177  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:21 old-k8s-version-216078 kubelet[659]: E0818 19:30:21.740932     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.712387  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:25 old-k8s-version-216078 kubelet[659]: E0818 19:30:25.740889     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.712748  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:35 old-k8s-version-216078 kubelet[659]: E0818 19:30:35.740311     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.712948  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:37 old-k8s-version-216078 kubelet[659]: E0818 19:30:37.740695     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.713322  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:50 old-k8s-version-216078 kubelet[659]: E0818 19:30:50.741003     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.713829  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:51 old-k8s-version-216078 kubelet[659]: E0818 19:30:51.639933     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.714214  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:57 old-k8s-version-216078 kubelet[659]: E0818 19:30:57.681246     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.714413  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:05 old-k8s-version-216078 kubelet[659]: E0818 19:31:05.740483     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.714780  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:10 old-k8s-version-216078 kubelet[659]: E0818 19:31:10.740145     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.714995  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:18 old-k8s-version-216078 kubelet[659]: E0818 19:31:18.740514     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.715393  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:23 old-k8s-version-216078 kubelet[659]: E0818 19:31:23.744229     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.715611  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:29 old-k8s-version-216078 kubelet[659]: E0818 19:31:29.741416     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.716019  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:38 old-k8s-version-216078 kubelet[659]: E0818 19:31:38.740103     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.720670  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:40 old-k8s-version-216078 kubelet[659]: E0818 19:31:40.747496     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.721103  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:52 old-k8s-version-216078 kubelet[659]: E0818 19:31:52.740064     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.721320  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:52 old-k8s-version-216078 kubelet[659]: E0818 19:31:52.741572     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.721722  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:04 old-k8s-version-216078 kubelet[659]: E0818 19:32:04.740152     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.721938  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:05 old-k8s-version-216078 kubelet[659]: E0818 19:32:05.740613     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.722165  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:16 old-k8s-version-216078 kubelet[659]: E0818 19:32:16.740992     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.722842  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:17 old-k8s-version-216078 kubelet[659]: E0818 19:32:17.876392     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.723219  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:27 old-k8s-version-216078 kubelet[659]: E0818 19:32:27.681615     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.723422  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:31 old-k8s-version-216078 kubelet[659]: E0818 19:32:31.743158     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.723815  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:42 old-k8s-version-216078 kubelet[659]: E0818 19:32:42.740105     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.724033  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:45 old-k8s-version-216078 kubelet[659]: E0818 19:32:45.740578     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.724417  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:57 old-k8s-version-216078 kubelet[659]: E0818 19:32:57.740499     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.724633  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:58 old-k8s-version-216078 kubelet[659]: E0818 19:32:58.740474     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.725032  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:10 old-k8s-version-216078 kubelet[659]: E0818 19:33:10.740134     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.725268  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:12 old-k8s-version-216078 kubelet[659]: E0818 19:33:12.740523     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.725663  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:21 old-k8s-version-216078 kubelet[659]: E0818 19:33:21.744175     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.726002  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:26 old-k8s-version-216078 kubelet[659]: E0818 19:33:26.742186     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.726377  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:33 old-k8s-version-216078 kubelet[659]: E0818 19:33:33.745221     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.726576  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:37 old-k8s-version-216078 kubelet[659]: E0818 19:33:37.740667     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.726949  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:46 old-k8s-version-216078 kubelet[659]: E0818 19:33:46.740184     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.727179  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:48 old-k8s-version-216078 kubelet[659]: E0818 19:33:48.741619     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.727551  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:59 old-k8s-version-216078 kubelet[659]: E0818 19:33:59.740109     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.727776  365394 logs.go:138] Found kubelet problem: Aug 18 19:34:02 old-k8s-version-216078 kubelet[659]: E0818 19:34:02.740456     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.728248  365394 logs.go:138] Found kubelet problem: Aug 18 19:34:10 old-k8s-version-216078 kubelet[659]: E0818 19:34:10.740182     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.728718  365394 logs.go:138] Found kubelet problem: Aug 18 19:34:15 old-k8s-version-216078 kubelet[659]: E0818 19:34:15.741175     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0818 19:34:17.728740  365394 logs.go:123] Gathering logs for kube-apiserver [a80fdd47f7301370c07df42778e346c5b3f9a5082205cddd92243b2292907316] ...
	I0818 19:34:17.728755  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80fdd47f7301370c07df42778e346c5b3f9a5082205cddd92243b2292907316"
	I0818 19:34:17.819506  365394 logs.go:123] Gathering logs for etcd [dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1] ...
	I0818 19:34:17.819607  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1"
	I0818 19:34:17.873771  365394 logs.go:123] Gathering logs for coredns [9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195] ...
	I0818 19:34:17.873809  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195"
	I0818 19:34:17.943754  365394 logs.go:123] Gathering logs for coredns [f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894] ...
	I0818 19:34:17.943828  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894"
	I0818 19:34:17.999633  365394 logs.go:123] Gathering logs for kindnet [136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1] ...
	I0818 19:34:17.999662  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1"
	I0818 19:34:18.083570  365394 logs.go:123] Gathering logs for describe nodes ...
	I0818 19:34:18.083614  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 19:34:18.296468  365394 logs.go:123] Gathering logs for kube-apiserver [a70df47e0a4c904f4234489c4b6931a0c40e23791ebbe701bdf79b173c708054] ...
	I0818 19:34:18.296516  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70df47e0a4c904f4234489c4b6931a0c40e23791ebbe701bdf79b173c708054"
	I0818 19:34:18.408946  365394 logs.go:123] Gathering logs for kube-scheduler [c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10] ...
	I0818 19:34:18.408985  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10"
	I0818 19:34:18.565143  365394 logs.go:123] Gathering logs for kindnet [80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c] ...
	I0818 19:34:18.565179  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c"
	I0818 19:34:18.658398  365394 out.go:358] Setting ErrFile to fd 2...
	I0818 19:34:18.658433  365394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 19:34:18.658675  365394 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0818 19:34:18.658761  365394 out.go:270]   Aug 18 19:33:48 old-k8s-version-216078 kubelet[659]: E0818 19:33:48.741619     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 18 19:33:48 old-k8s-version-216078 kubelet[659]: E0818 19:33:48.741619     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:18.658792  365394 out.go:270]   Aug 18 19:33:59 old-k8s-version-216078 kubelet[659]: E0818 19:33:59.740109     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	  Aug 18 19:33:59 old-k8s-version-216078 kubelet[659]: E0818 19:33:59.740109     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:18.658809  365394 out.go:270]   Aug 18 19:34:02 old-k8s-version-216078 kubelet[659]: E0818 19:34:02.740456     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 18 19:34:02 old-k8s-version-216078 kubelet[659]: E0818 19:34:02.740456     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:18.658836  365394 out.go:270]   Aug 18 19:34:10 old-k8s-version-216078 kubelet[659]: E0818 19:34:10.740182     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	  Aug 18 19:34:10 old-k8s-version-216078 kubelet[659]: E0818 19:34:10.740182     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:18.658848  365394 out.go:270]   Aug 18 19:34:15 old-k8s-version-216078 kubelet[659]: E0818 19:34:15.741175     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Aug 18 19:34:15 old-k8s-version-216078 kubelet[659]: E0818 19:34:15.741175     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0818 19:34:18.658864  365394 out.go:358] Setting ErrFile to fd 2...
	I0818 19:34:18.658873  365394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:34:28.659657  365394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:34:28.677401  365394 api_server.go:72] duration metric: took 6m4.945994591s to wait for apiserver process to appear ...
	I0818 19:34:28.677425  365394 api_server.go:88] waiting for apiserver healthz status ...
	I0818 19:34:28.679981  365394 out.go:201] 
	W0818 19:34:28.681909  365394 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0818 19:34:28.681926  365394 out.go:270] * 
	* 
	W0818 19:34:28.682790  365394 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 19:34:28.684398  365394 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-216078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-216078
helpers_test.go:235: (dbg) docker inspect old-k8s-version-216078:

-- stdout --
	[
	    {
	        "Id": "4027f4b33bb4ca37c406698bb1856a2f1c16a4c18c58d19b3d099d0206277db7",
	        "Created": "2024-08-18T19:25:29.097327473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 365591,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-18T19:28:16.864292841Z",
	            "FinishedAt": "2024-08-18T19:28:15.64235168Z"
	        },
	        "Image": "sha256:decdd59746a9dba10062a73f6cd4b910c7b4e60613660b1022f8357747681c4d",
	        "ResolvConfPath": "/var/lib/docker/containers/4027f4b33bb4ca37c406698bb1856a2f1c16a4c18c58d19b3d099d0206277db7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4027f4b33bb4ca37c406698bb1856a2f1c16a4c18c58d19b3d099d0206277db7/hostname",
	        "HostsPath": "/var/lib/docker/containers/4027f4b33bb4ca37c406698bb1856a2f1c16a4c18c58d19b3d099d0206277db7/hosts",
	        "LogPath": "/var/lib/docker/containers/4027f4b33bb4ca37c406698bb1856a2f1c16a4c18c58d19b3d099d0206277db7/4027f4b33bb4ca37c406698bb1856a2f1c16a4c18c58d19b3d099d0206277db7-json.log",
	        "Name": "/old-k8s-version-216078",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-216078:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-216078",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8b105a9abeb464767bfe46c6063ee3ad96eb77ff9bfff6d4de991537b2d4e1da-init/diff:/var/lib/docker/overlay2/335569924eb2f5a2927a3aad525f5945de522a21e4174960fd450e8e86ba9355/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b105a9abeb464767bfe46c6063ee3ad96eb77ff9bfff6d4de991537b2d4e1da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b105a9abeb464767bfe46c6063ee3ad96eb77ff9bfff6d4de991537b2d4e1da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b105a9abeb464767bfe46c6063ee3ad96eb77ff9bfff6d4de991537b2d4e1da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-216078",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-216078/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-216078",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-216078",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-216078",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "65978f536a4bcac74dcd69df859c07c200cfa78882d13bf6e736112c109a6bcd",
	            "SandboxKey": "/var/run/docker/netns/65978f536a4b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38614"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38615"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38618"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38616"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "38617"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-216078": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ecefefb0f0941da8d1c771dd876ea88ec636acf09d04e88b8b772411019d4e9f",
	                    "EndpointID": "ef8c5a035f526ece98584f2b4e128d03177684e2120a9131b4a4b02b9b2a0f64",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-216078",
	                        "4027f4b33bb4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-216078 -n old-k8s-version-216078
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-216078 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-216078 logs -n 25: (2.858493374s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-045829                              | cert-expiration-045829   | jenkins | v1.33.1 | 18 Aug 24 19:24 UTC | 18 Aug 24 19:24 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-467857                               | force-systemd-env-467857 | jenkins | v1.33.1 | 18 Aug 24 19:24 UTC | 18 Aug 24 19:24 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-467857                            | force-systemd-env-467857 | jenkins | v1.33.1 | 18 Aug 24 19:24 UTC | 18 Aug 24 19:24 UTC |
	| start   | -p cert-options-562344                                 | cert-options-562344      | jenkins | v1.33.1 | 18 Aug 24 19:24 UTC | 18 Aug 24 19:25 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-562344 ssh                                | cert-options-562344      | jenkins | v1.33.1 | 18 Aug 24 19:25 UTC | 18 Aug 24 19:25 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-562344 -- sudo                         | cert-options-562344      | jenkins | v1.33.1 | 18 Aug 24 19:25 UTC | 18 Aug 24 19:25 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-562344                                 | cert-options-562344      | jenkins | v1.33.1 | 18 Aug 24 19:25 UTC | 18 Aug 24 19:25 UTC |
	| start   | -p old-k8s-version-216078                              | old-k8s-version-216078   | jenkins | v1.33.1 | 18 Aug 24 19:25 UTC | 18 Aug 24 19:27 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-045829                              | cert-expiration-045829   | jenkins | v1.33.1 | 18 Aug 24 19:27 UTC | 18 Aug 24 19:28 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-045829                              | cert-expiration-045829   | jenkins | v1.33.1 | 18 Aug 24 19:28 UTC | 18 Aug 24 19:28 UTC |
	| addons  | enable metrics-server -p old-k8s-version-216078        | old-k8s-version-216078   | jenkins | v1.33.1 | 18 Aug 24 19:28 UTC | 18 Aug 24 19:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-216078                              | old-k8s-version-216078   | jenkins | v1.33.1 | 18 Aug 24 19:28 UTC | 18 Aug 24 19:28 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| start   | -p no-preload-091348                                   | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:28 UTC | 18 Aug 24 19:29 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-216078             | old-k8s-version-216078   | jenkins | v1.33.1 | 18 Aug 24 19:28 UTC | 18 Aug 24 19:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-216078                              | old-k8s-version-216078   | jenkins | v1.33.1 | 18 Aug 24 19:28 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-091348             | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:29 UTC | 18 Aug 24 19:29 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-091348                                   | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:29 UTC | 18 Aug 24 19:29 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-091348                  | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:29 UTC | 18 Aug 24 19:29 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-091348                                   | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:29 UTC | 18 Aug 24 19:34 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	| image   | no-preload-091348 image list                           | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:34 UTC | 18 Aug 24 19:34 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-091348                                   | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:34 UTC | 18 Aug 24 19:34 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-091348                                   | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:34 UTC | 18 Aug 24 19:34 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-091348                                   | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:34 UTC | 18 Aug 24 19:34 UTC |
	| delete  | -p no-preload-091348                                   | no-preload-091348        | jenkins | v1.33.1 | 18 Aug 24 19:34 UTC | 18 Aug 24 19:34 UTC |
	| start   | -p embed-certs-568075                                  | embed-certs-568075       | jenkins | v1.33.1 | 18 Aug 24 19:34 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 19:34:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 19:34:18.832528  376647 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:34:18.832660  376647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:34:18.832670  376647 out.go:358] Setting ErrFile to fd 2...
	I0818 19:34:18.832676  376647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:34:18.832937  376647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 19:34:18.833346  376647 out.go:352] Setting JSON to false
	I0818 19:34:18.834364  376647 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":101803,"bootTime":1723907856,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0818 19:34:18.834440  376647 start.go:139] virtualization:  
	I0818 19:34:18.837321  376647 out.go:177] * [embed-certs-568075] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0818 19:34:18.840342  376647 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:34:18.840591  376647 notify.go:220] Checking for updates...
	I0818 19:34:18.845265  376647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:34:18.847003  376647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 19:34:18.848846  376647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	I0818 19:34:18.850914  376647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0818 19:34:18.852853  376647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:34:18.855777  376647 config.go:182] Loaded profile config "old-k8s-version-216078": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0818 19:34:18.855929  376647 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:34:18.889488  376647 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0818 19:34:18.889618  376647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 19:34:18.960327  376647 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-18 19:34:18.949655128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 19:34:18.960448  376647 docker.go:307] overlay module found
	I0818 19:34:18.962717  376647 out.go:177] * Using the docker driver based on user configuration
	I0818 19:34:18.964523  376647 start.go:297] selected driver: docker
	I0818 19:34:18.964548  376647 start.go:901] validating driver "docker" against <nil>
	I0818 19:34:18.964562  376647 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:34:18.965279  376647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 19:34:19.023395  376647 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-08-18 19:34:19.013448219 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 19:34:19.023584  376647 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 19:34:19.023866  376647 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 19:34:19.025861  376647 out.go:177] * Using Docker driver with root privileges
	I0818 19:34:19.027969  376647 cni.go:84] Creating CNI manager for ""
	I0818 19:34:19.027994  376647 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0818 19:34:19.028011  376647 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0818 19:34:19.028107  376647 start.go:340] cluster config:
	{Name:embed-certs-568075 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-568075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 19:34:19.029970  376647 out.go:177] * Starting "embed-certs-568075" primary control-plane node in "embed-certs-568075" cluster
	I0818 19:34:19.031860  376647 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0818 19:34:19.033360  376647 out.go:177] * Pulling base image v0.0.44-1723740748-19452 ...
	I0818 19:34:19.035317  376647 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0818 19:34:19.035379  376647 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0818 19:34:19.035390  376647 cache.go:56] Caching tarball of preloaded images
	I0818 19:34:19.035398  376647 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0818 19:34:19.035474  376647 preload.go:172] Found /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0818 19:34:19.035485  376647 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0818 19:34:19.035592  376647 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/embed-certs-568075/config.json ...
	I0818 19:34:19.035614  376647 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/embed-certs-568075/config.json: {Name:mk682197b18358175a74e249f8d7f83acb1daf7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0818 19:34:19.054758  376647 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d is of wrong architecture
	I0818 19:34:19.054782  376647 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0818 19:34:19.054852  376647 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0818 19:34:19.054875  376647 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0818 19:34:19.054885  376647 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0818 19:34:19.054893  376647 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0818 19:34:19.054902  376647 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from local cache
	I0818 19:34:19.189978  376647 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d from cached tarball
	I0818 19:34:19.190022  376647 cache.go:194] Successfully downloaded all kic artifacts
	I0818 19:34:19.190059  376647 start.go:360] acquireMachinesLock for embed-certs-568075: {Name:mkc9849c557d3f200fc5aa91646d695933a9517a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 19:34:19.190983  376647 start.go:364] duration metric: took 897.465µs to acquireMachinesLock for "embed-certs-568075"
	I0818 19:34:19.191034  376647 start.go:93] Provisioning new machine with config: &{Name:embed-certs-568075 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:embed-certs-568075 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0818 19:34:19.191124  376647 start.go:125] createHost starting for "" (driver="docker")
	I0818 19:34:16.332699  365394 cri.go:89] found id: "03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178"
	I0818 19:34:16.332722  365394 cri.go:89] found id: "dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1"
	I0818 19:34:16.332727  365394 cri.go:89] found id: ""
	I0818 19:34:16.332749  365394 logs.go:276] 2 containers: [03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178 dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1]
	I0818 19:34:16.332805  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.338994  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.347143  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0818 19:34:16.347323  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0818 19:34:16.414776  365394 cri.go:89] found id: "9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195"
	I0818 19:34:16.414802  365394 cri.go:89] found id: "f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894"
	I0818 19:34:16.414812  365394 cri.go:89] found id: ""
	I0818 19:34:16.414823  365394 logs.go:276] 2 containers: [9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195 f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894]
	I0818 19:34:16.414925  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.422222  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.428687  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0818 19:34:16.428787  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0818 19:34:16.509373  365394 cri.go:89] found id: "3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4"
	I0818 19:34:16.509418  365394 cri.go:89] found id: "c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10"
	I0818 19:34:16.509429  365394 cri.go:89] found id: ""
	I0818 19:34:16.509438  365394 logs.go:276] 2 containers: [3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4 c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10]
	I0818 19:34:16.509519  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.515307  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.519719  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0818 19:34:16.519907  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0818 19:34:16.598251  365394 cri.go:89] found id: "d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f"
	I0818 19:34:16.598273  365394 cri.go:89] found id: "905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1"
	I0818 19:34:16.598278  365394 cri.go:89] found id: ""
	I0818 19:34:16.598289  365394 logs.go:276] 2 containers: [d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f 905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1]
	I0818 19:34:16.598352  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.605307  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.611188  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0818 19:34:16.611266  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0818 19:34:16.674927  365394 cri.go:89] found id: "6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca"
	I0818 19:34:16.674951  365394 cri.go:89] found id: "00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad"
	I0818 19:34:16.674957  365394 cri.go:89] found id: ""
	I0818 19:34:16.674965  365394 logs.go:276] 2 containers: [6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca 00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad]
	I0818 19:34:16.675024  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.679400  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.682802  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0818 19:34:16.682868  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0818 19:34:16.728600  365394 cri.go:89] found id: "136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1"
	I0818 19:34:16.728621  365394 cri.go:89] found id: "80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c"
	I0818 19:34:16.728626  365394 cri.go:89] found id: ""
	I0818 19:34:16.728633  365394 logs.go:276] 2 containers: [136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1 80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c]
	I0818 19:34:16.728692  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.732280  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.735563  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0818 19:34:16.735678  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0818 19:34:16.776231  365394 cri.go:89] found id: "35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109"
	I0818 19:34:16.776254  365394 cri.go:89] found id: "48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa"
	I0818 19:34:16.776259  365394 cri.go:89] found id: ""
	I0818 19:34:16.776267  365394 logs.go:276] 2 containers: [35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109 48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa]
	I0818 19:34:16.776324  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.779901  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.783067  365394 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0818 19:34:16.783149  365394 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0818 19:34:16.825413  365394 cri.go:89] found id: "aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394"
	I0818 19:34:16.825435  365394 cri.go:89] found id: ""
	I0818 19:34:16.825444  365394 logs.go:276] 1 containers: [aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394]
	I0818 19:34:16.825499  365394 ssh_runner.go:195] Run: which crictl
	I0818 19:34:16.829000  365394 logs.go:123] Gathering logs for dmesg ...
	I0818 19:34:16.829067  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0818 19:34:16.845161  365394 logs.go:123] Gathering logs for etcd [03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178] ...
	I0818 19:34:16.845193  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178"
	I0818 19:34:16.899783  365394 logs.go:123] Gathering logs for kube-scheduler [3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4] ...
	I0818 19:34:16.899919  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4"
	I0818 19:34:16.941128  365394 logs.go:123] Gathering logs for kube-controller-manager [00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad] ...
	I0818 19:34:16.941168  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad"
	I0818 19:34:17.016362  365394 logs.go:123] Gathering logs for storage-provisioner [35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109] ...
	I0818 19:34:17.016400  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109"
	I0818 19:34:17.071278  365394 logs.go:123] Gathering logs for kubernetes-dashboard [aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394] ...
	I0818 19:34:17.071318  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394"
	I0818 19:34:17.148503  365394 logs.go:123] Gathering logs for kube-proxy [d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f] ...
	I0818 19:34:17.148774  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f"
	I0818 19:34:17.219704  365394 logs.go:123] Gathering logs for kube-proxy [905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1] ...
	I0818 19:34:17.219728  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1"
	I0818 19:34:17.276392  365394 logs.go:123] Gathering logs for kube-controller-manager [6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca] ...
	I0818 19:34:17.276421  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca"
	I0818 19:34:17.403147  365394 logs.go:123] Gathering logs for storage-provisioner [48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa] ...
	I0818 19:34:17.403183  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa"
	I0818 19:34:17.450187  365394 logs.go:123] Gathering logs for containerd ...
	I0818 19:34:17.450216  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0818 19:34:17.528357  365394 logs.go:123] Gathering logs for container status ...
	I0818 19:34:17.528397  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0818 19:34:17.619646  365394 logs.go:123] Gathering logs for kubelet ...
	I0818 19:34:17.619677  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0818 19:34:17.682312  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.116436     659 reflector.go:138] object-"kube-system"/"kube-proxy-token-gkhsq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-gkhsq" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.682589  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.117472     659 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.682950  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.130723     659 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jg9hx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jg9hx" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.683233  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.130814     659 reflector.go:138] object-"default"/"default-token-79mlg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-79mlg" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.683613  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.137493     659 reflector.go:138] object-"kube-system"/"metrics-server-token-lkhr7": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-lkhr7" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.683926  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.137601     659 reflector.go:138] object-"kube-system"/"kindnet-token-pj6jq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pj6jq" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.684195  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.137667     659 reflector.go:138] object-"kube-system"/"coredns-token-tstz9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-tstz9" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.684437  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:46 old-k8s-version-216078 kubelet[659]: E0818 19:28:46.137736     659 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-216078" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-216078' and this object
	W0818 19:34:17.694259  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:49 old-k8s-version-216078 kubelet[659]: E0818 19:28:49.150627     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.694481  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:49 old-k8s-version-216078 kubelet[659]: E0818 19:28:49.197339     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.698078  365394 logs.go:138] Found kubelet problem: Aug 18 19:28:59 old-k8s-version-216078 kubelet[659]: E0818 19:28:59.777587     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.700642  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:11 old-k8s-version-216078 kubelet[659]: E0818 19:29:11.292167     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.701410  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:12 old-k8s-version-216078 kubelet[659]: E0818 19:29:12.336846     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.701625  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:14 old-k8s-version-216078 kubelet[659]: E0818 19:29:14.740520     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.702008  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:17 old-k8s-version-216078 kubelet[659]: E0818 19:29:17.681494     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.702525  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:20 old-k8s-version-216078 kubelet[659]: E0818 19:29:20.366319     659 pod_workers.go:191] Error syncing pod b224ab7a-edb7-4ab4-9f31-ddafa7172d46 ("storage-provisioner_kube-system(b224ab7a-edb7-4ab4-9f31-ddafa7172d46)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b224ab7a-edb7-4ab4-9f31-ddafa7172d46)"
	W0818 19:34:17.705773  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:27 old-k8s-version-216078 kubelet[659]: E0818 19:29:27.748257     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.706438  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:30 old-k8s-version-216078 kubelet[659]: E0818 19:29:30.417852     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.706945  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:37 old-k8s-version-216078 kubelet[659]: E0818 19:29:37.681274     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.707162  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:42 old-k8s-version-216078 kubelet[659]: E0818 19:29:42.772972     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.707534  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:49 old-k8s-version-216078 kubelet[659]: E0818 19:29:49.740124     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.707746  365394 logs.go:138] Found kubelet problem: Aug 18 19:29:56 old-k8s-version-216078 kubelet[659]: E0818 19:29:56.740480     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.708431  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:01 old-k8s-version-216078 kubelet[659]: E0818 19:30:01.513179     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.708780  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:07 old-k8s-version-216078 kubelet[659]: E0818 19:30:07.681775     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.711748  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:10 old-k8s-version-216078 kubelet[659]: E0818 19:30:10.748586     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.712177  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:21 old-k8s-version-216078 kubelet[659]: E0818 19:30:21.740932     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.712387  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:25 old-k8s-version-216078 kubelet[659]: E0818 19:30:25.740889     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.712748  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:35 old-k8s-version-216078 kubelet[659]: E0818 19:30:35.740311     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.712948  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:37 old-k8s-version-216078 kubelet[659]: E0818 19:30:37.740695     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.713322  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:50 old-k8s-version-216078 kubelet[659]: E0818 19:30:50.741003     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.713829  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:51 old-k8s-version-216078 kubelet[659]: E0818 19:30:51.639933     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.714214  365394 logs.go:138] Found kubelet problem: Aug 18 19:30:57 old-k8s-version-216078 kubelet[659]: E0818 19:30:57.681246     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.714413  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:05 old-k8s-version-216078 kubelet[659]: E0818 19:31:05.740483     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.714780  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:10 old-k8s-version-216078 kubelet[659]: E0818 19:31:10.740145     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.714995  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:18 old-k8s-version-216078 kubelet[659]: E0818 19:31:18.740514     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.715393  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:23 old-k8s-version-216078 kubelet[659]: E0818 19:31:23.744229     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.715611  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:29 old-k8s-version-216078 kubelet[659]: E0818 19:31:29.741416     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.716019  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:38 old-k8s-version-216078 kubelet[659]: E0818 19:31:38.740103     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.720670  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:40 old-k8s-version-216078 kubelet[659]: E0818 19:31:40.747496     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0818 19:34:17.721103  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:52 old-k8s-version-216078 kubelet[659]: E0818 19:31:52.740064     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.721320  365394 logs.go:138] Found kubelet problem: Aug 18 19:31:52 old-k8s-version-216078 kubelet[659]: E0818 19:31:52.741572     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.721722  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:04 old-k8s-version-216078 kubelet[659]: E0818 19:32:04.740152     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.721938  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:05 old-k8s-version-216078 kubelet[659]: E0818 19:32:05.740613     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.722165  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:16 old-k8s-version-216078 kubelet[659]: E0818 19:32:16.740992     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.722842  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:17 old-k8s-version-216078 kubelet[659]: E0818 19:32:17.876392     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.723219  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:27 old-k8s-version-216078 kubelet[659]: E0818 19:32:27.681615     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.723422  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:31 old-k8s-version-216078 kubelet[659]: E0818 19:32:31.743158     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.723815  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:42 old-k8s-version-216078 kubelet[659]: E0818 19:32:42.740105     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.724033  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:45 old-k8s-version-216078 kubelet[659]: E0818 19:32:45.740578     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.724417  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:57 old-k8s-version-216078 kubelet[659]: E0818 19:32:57.740499     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.724633  365394 logs.go:138] Found kubelet problem: Aug 18 19:32:58 old-k8s-version-216078 kubelet[659]: E0818 19:32:58.740474     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.725032  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:10 old-k8s-version-216078 kubelet[659]: E0818 19:33:10.740134     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.725268  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:12 old-k8s-version-216078 kubelet[659]: E0818 19:33:12.740523     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.725663  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:21 old-k8s-version-216078 kubelet[659]: E0818 19:33:21.744175     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.726002  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:26 old-k8s-version-216078 kubelet[659]: E0818 19:33:26.742186     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.726377  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:33 old-k8s-version-216078 kubelet[659]: E0818 19:33:33.745221     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.726576  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:37 old-k8s-version-216078 kubelet[659]: E0818 19:33:37.740667     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.726949  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:46 old-k8s-version-216078 kubelet[659]: E0818 19:33:46.740184     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.727179  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:48 old-k8s-version-216078 kubelet[659]: E0818 19:33:48.741619     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.727551  365394 logs.go:138] Found kubelet problem: Aug 18 19:33:59 old-k8s-version-216078 kubelet[659]: E0818 19:33:59.740109     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.727776  365394 logs.go:138] Found kubelet problem: Aug 18 19:34:02 old-k8s-version-216078 kubelet[659]: E0818 19:34:02.740456     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:17.728248  365394 logs.go:138] Found kubelet problem: Aug 18 19:34:10 old-k8s-version-216078 kubelet[659]: E0818 19:34:10.740182     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:17.728718  365394 logs.go:138] Found kubelet problem: Aug 18 19:34:15 old-k8s-version-216078 kubelet[659]: E0818 19:34:15.741175     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0818 19:34:17.728740  365394 logs.go:123] Gathering logs for kube-apiserver [a80fdd47f7301370c07df42778e346c5b3f9a5082205cddd92243b2292907316] ...
	I0818 19:34:17.728755  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a80fdd47f7301370c07df42778e346c5b3f9a5082205cddd92243b2292907316"
	I0818 19:34:17.819506  365394 logs.go:123] Gathering logs for etcd [dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1] ...
	I0818 19:34:17.819607  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1"
	I0818 19:34:17.873771  365394 logs.go:123] Gathering logs for coredns [9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195] ...
	I0818 19:34:17.873809  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195"
	I0818 19:34:17.943754  365394 logs.go:123] Gathering logs for coredns [f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894] ...
	I0818 19:34:17.943828  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894"
	I0818 19:34:17.999633  365394 logs.go:123] Gathering logs for kindnet [136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1] ...
	I0818 19:34:17.999662  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1"
	I0818 19:34:18.083570  365394 logs.go:123] Gathering logs for describe nodes ...
	I0818 19:34:18.083614  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0818 19:34:18.296468  365394 logs.go:123] Gathering logs for kube-apiserver [a70df47e0a4c904f4234489c4b6931a0c40e23791ebbe701bdf79b173c708054] ...
	I0818 19:34:18.296516  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a70df47e0a4c904f4234489c4b6931a0c40e23791ebbe701bdf79b173c708054"
	I0818 19:34:18.408946  365394 logs.go:123] Gathering logs for kube-scheduler [c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10] ...
	I0818 19:34:18.408985  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10"
	I0818 19:34:18.565143  365394 logs.go:123] Gathering logs for kindnet [80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c] ...
	I0818 19:34:18.565179  365394 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c"
	I0818 19:34:18.658398  365394 out.go:358] Setting ErrFile to fd 2...
	I0818 19:34:18.658433  365394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0818 19:34:18.658675  365394 out.go:270] X Problems detected in kubelet:
	W0818 19:34:18.658761  365394 out.go:270]   Aug 18 19:33:48 old-k8s-version-216078 kubelet[659]: E0818 19:33:48.741619     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:18.658792  365394 out.go:270]   Aug 18 19:33:59 old-k8s-version-216078 kubelet[659]: E0818 19:33:59.740109     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:18.658809  365394 out.go:270]   Aug 18 19:34:02 old-k8s-version-216078 kubelet[659]: E0818 19:34:02.740456     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0818 19:34:18.658836  365394 out.go:270]   Aug 18 19:34:10 old-k8s-version-216078 kubelet[659]: E0818 19:34:10.740182     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	W0818 19:34:18.658848  365394 out.go:270]   Aug 18 19:34:15 old-k8s-version-216078 kubelet[659]: E0818 19:34:15.741175     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0818 19:34:18.658864  365394 out.go:358] Setting ErrFile to fd 2...
	I0818 19:34:18.658873  365394 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:34:19.194097  376647 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0818 19:34:19.194353  376647 start.go:159] libmachine.API.Create for "embed-certs-568075" (driver="docker")
	I0818 19:34:19.194392  376647 client.go:168] LocalClient.Create starting
	I0818 19:34:19.194479  376647 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem
	I0818 19:34:19.194522  376647 main.go:141] libmachine: Decoding PEM data...
	I0818 19:34:19.194542  376647 main.go:141] libmachine: Parsing certificate...
	I0818 19:34:19.194599  376647 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem
	I0818 19:34:19.194620  376647 main.go:141] libmachine: Decoding PEM data...
	I0818 19:34:19.194630  376647 main.go:141] libmachine: Parsing certificate...
	I0818 19:34:19.195017  376647 cli_runner.go:164] Run: docker network inspect embed-certs-568075 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0818 19:34:19.210772  376647 cli_runner.go:211] docker network inspect embed-certs-568075 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0818 19:34:19.210847  376647 network_create.go:284] running [docker network inspect embed-certs-568075] to gather additional debugging logs...
	I0818 19:34:19.210864  376647 cli_runner.go:164] Run: docker network inspect embed-certs-568075
	W0818 19:34:19.235088  376647 cli_runner.go:211] docker network inspect embed-certs-568075 returned with exit code 1
	I0818 19:34:19.235120  376647 network_create.go:287] error running [docker network inspect embed-certs-568075]: docker network inspect embed-certs-568075: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-568075 not found
	I0818 19:34:19.235134  376647 network_create.go:289] output of [docker network inspect embed-certs-568075]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-568075 not found
	
	** /stderr **
	I0818 19:34:19.235238  376647 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0818 19:34:19.251430  376647 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c2352e65d8eb IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:84:67:35:66} reservation:<nil>}
	I0818 19:34:19.251914  376647 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-8c8d5c217e77 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:64:96:86:e3} reservation:<nil>}
	I0818 19:34:19.252359  376647 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-06b8887a5149 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:66:a4:e2:23} reservation:<nil>}
	I0818 19:34:19.252751  376647 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-ecefefb0f094 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:79:13:5e:c5} reservation:<nil>}
	I0818 19:34:19.253376  376647 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400187d0e0}
	I0818 19:34:19.253441  376647 network_create.go:124] attempt to create docker network embed-certs-568075 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0818 19:34:19.253536  376647 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-568075 embed-certs-568075
	I0818 19:34:19.328644  376647 network_create.go:108] docker network embed-certs-568075 192.168.85.0/24 created
	I0818 19:34:19.328678  376647 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-568075" container
	I0818 19:34:19.328775  376647 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0818 19:34:19.344570  376647 cli_runner.go:164] Run: docker volume create embed-certs-568075 --label name.minikube.sigs.k8s.io=embed-certs-568075 --label created_by.minikube.sigs.k8s.io=true
	I0818 19:34:19.361491  376647 oci.go:103] Successfully created a docker volume embed-certs-568075
	I0818 19:34:19.361580  376647 cli_runner.go:164] Run: docker run --rm --name embed-certs-568075-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-568075 --entrypoint /usr/bin/test -v embed-certs-568075:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -d /var/lib
	I0818 19:34:20.051961  376647 oci.go:107] Successfully prepared a docker volume embed-certs-568075
	I0818 19:34:20.052029  376647 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0818 19:34:20.052060  376647 kic.go:194] Starting extracting preloaded images to volume ...
	I0818 19:34:20.052151  376647 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-568075:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir
	I0818 19:34:28.659657  365394 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:34:28.677401  365394 api_server.go:72] duration metric: took 6m4.945994591s to wait for apiserver process to appear ...
	I0818 19:34:28.677425  365394 api_server.go:88] waiting for apiserver healthz status ...
	I0818 19:34:28.679981  365394 out.go:201] 
	W0818 19:34:28.681909  365394 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0818 19:34:28.681926  365394 out.go:270] * 
	W0818 19:34:28.682790  365394 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0818 19:34:28.684398  365394 out.go:201] 
	I0818 19:34:25.135041  376647 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-568075:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d -I lz4 -xf /preloaded.tar -C /extractDir: (5.082839168s)
	I0818 19:34:25.135077  376647 kic.go:203] duration metric: took 5.083012722s to extract preloaded images to volume ...
	W0818 19:34:25.135228  376647 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0818 19:34:25.135343  376647 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0818 19:34:25.193878  376647 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-568075 --name embed-certs-568075 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-568075 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-568075 --network embed-certs-568075 --ip 192.168.85.2 --volume embed-certs-568075:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d
	I0818 19:34:25.533643  376647 cli_runner.go:164] Run: docker container inspect embed-certs-568075 --format={{.State.Running}}
	I0818 19:34:25.560630  376647 cli_runner.go:164] Run: docker container inspect embed-certs-568075 --format={{.State.Status}}
	I0818 19:34:25.581316  376647 cli_runner.go:164] Run: docker exec embed-certs-568075 stat /var/lib/dpkg/alternatives/iptables
	I0818 19:34:25.638700  376647 oci.go:144] the created container "embed-certs-568075" has a running status.
	I0818 19:34:25.638734  376647 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19423-154159/.minikube/machines/embed-certs-568075/id_rsa...
	I0818 19:34:26.164166  376647 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19423-154159/.minikube/machines/embed-certs-568075/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0818 19:34:26.191066  376647 cli_runner.go:164] Run: docker container inspect embed-certs-568075 --format={{.State.Status}}
	I0818 19:34:26.232079  376647 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0818 19:34:26.232116  376647 kic_runner.go:114] Args: [docker exec --privileged embed-certs-568075 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0818 19:34:26.322443  376647 cli_runner.go:164] Run: docker container inspect embed-certs-568075 --format={{.State.Status}}
	I0818 19:34:26.346380  376647 machine.go:93] provisionDockerMachine start ...
	I0818 19:34:26.346482  376647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-568075
	I0818 19:34:26.369606  376647 main.go:141] libmachine: Using SSH client type: native
	I0818 19:34:26.369882  376647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38624 <nil> <nil>}
	I0818 19:34:26.369897  376647 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 19:34:26.542275  376647 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-568075
	
	I0818 19:34:26.542318  376647 ubuntu.go:169] provisioning hostname "embed-certs-568075"
	I0818 19:34:26.542394  376647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-568075
	I0818 19:34:26.566719  376647 main.go:141] libmachine: Using SSH client type: native
	I0818 19:34:26.566976  376647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38624 <nil> <nil>}
	I0818 19:34:26.566991  376647 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-568075 && echo "embed-certs-568075" | sudo tee /etc/hostname
	I0818 19:34:26.723271  376647 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-568075
	
	I0818 19:34:26.723380  376647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-568075
	I0818 19:34:26.761932  376647 main.go:141] libmachine: Using SSH client type: native
	I0818 19:34:26.762235  376647 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 38624 <nil> <nil>}
	I0818 19:34:26.762259  376647 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-568075' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-568075/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-568075' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 19:34:26.899881  376647 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 19:34:26.899911  376647 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19423-154159/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-154159/.minikube}
	I0818 19:34:26.899940  376647 ubuntu.go:177] setting up certificates
	I0818 19:34:26.899950  376647 provision.go:84] configureAuth start
	I0818 19:34:26.900023  376647 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-568075
	I0818 19:34:26.924238  376647 provision.go:143] copyHostCerts
	I0818 19:34:26.924297  376647 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-154159/.minikube/ca.pem, removing ...
	I0818 19:34:26.924307  376647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-154159/.minikube/ca.pem
	I0818 19:34:26.924385  376647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/ca.pem (1082 bytes)
	I0818 19:34:26.924477  376647 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-154159/.minikube/cert.pem, removing ...
	I0818 19:34:26.924482  376647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-154159/.minikube/cert.pem
	I0818 19:34:26.924513  376647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/cert.pem (1123 bytes)
	I0818 19:34:26.924564  376647 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-154159/.minikube/key.pem, removing ...
	I0818 19:34:26.924569  376647 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-154159/.minikube/key.pem
	I0818 19:34:26.924590  376647 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-154159/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-154159/.minikube/key.pem (1675 bytes)
	I0818 19:34:26.924803  376647 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca-key.pem org=jenkins.embed-certs-568075 san=[127.0.0.1 192.168.85.2 embed-certs-568075 localhost minikube]
	I0818 19:34:27.535542  376647 provision.go:177] copyRemoteCerts
	I0818 19:34:27.535620  376647 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 19:34:27.535665  376647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-568075
	I0818 19:34:27.558893  376647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38624 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/embed-certs-568075/id_rsa Username:docker}
	I0818 19:34:27.652776  376647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0818 19:34:27.677498  376647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0818 19:34:27.701893  376647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0818 19:34:27.726129  376647 provision.go:87] duration metric: took 826.161156ms to configureAuth
	I0818 19:34:27.726157  376647 ubuntu.go:193] setting minikube options for container-runtime
	I0818 19:34:27.726343  376647 config.go:182] Loaded profile config "embed-certs-568075": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 19:34:27.726356  376647 machine.go:96] duration metric: took 1.379950388s to provisionDockerMachine
	I0818 19:34:27.726364  376647 client.go:171] duration metric: took 8.531961394s to LocalClient.Create
	I0818 19:34:27.726381  376647 start.go:167] duration metric: took 8.53202957s to libmachine.API.Create "embed-certs-568075"
	I0818 19:34:27.726396  376647 start.go:293] postStartSetup for "embed-certs-568075" (driver="docker")
	I0818 19:34:27.726407  376647 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 19:34:27.726461  376647 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 19:34:27.726509  376647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-568075
	I0818 19:34:27.750024  376647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38624 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/embed-certs-568075/id_rsa Username:docker}
	I0818 19:34:27.849358  376647 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 19:34:27.852663  376647 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0818 19:34:27.852708  376647 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0818 19:34:27.852722  376647 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0818 19:34:27.852733  376647 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0818 19:34:27.852743  376647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-154159/.minikube/addons for local assets ...
	I0818 19:34:27.852807  376647 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-154159/.minikube/files for local assets ...
	I0818 19:34:27.852893  376647 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-154159/.minikube/files/etc/ssl/certs/1595492.pem -> 1595492.pem in /etc/ssl/certs
	I0818 19:34:27.852999  376647 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0818 19:34:27.862149  376647 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-154159/.minikube/files/etc/ssl/certs/1595492.pem --> /etc/ssl/certs/1595492.pem (1708 bytes)
	I0818 19:34:27.890178  376647 start.go:296] duration metric: took 163.765374ms for postStartSetup
	I0818 19:34:27.890547  376647 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-568075
	I0818 19:34:27.907152  376647 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/embed-certs-568075/config.json ...
	I0818 19:34:27.907566  376647 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:34:27.907627  376647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-568075
	I0818 19:34:27.924459  376647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38624 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/embed-certs-568075/id_rsa Username:docker}
	I0818 19:34:28.015255  376647 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0818 19:34:28.020592  376647 start.go:128] duration metric: took 8.829450914s to createHost
	I0818 19:34:28.020665  376647 start.go:83] releasing machines lock for "embed-certs-568075", held for 8.829660431s
	I0818 19:34:28.020775  376647 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-568075
	I0818 19:34:28.038536  376647 ssh_runner.go:195] Run: cat /version.json
	I0818 19:34:28.038594  376647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-568075
	I0818 19:34:28.038593  376647 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 19:34:28.038728  376647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-568075
	I0818 19:34:28.060419  376647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38624 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/embed-certs-568075/id_rsa Username:docker}
	I0818 19:34:28.073721  376647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38624 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/embed-certs-568075/id_rsa Username:docker}
	I0818 19:34:28.278122  376647 ssh_runner.go:195] Run: systemctl --version
	I0818 19:34:28.282696  376647 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0818 19:34:28.287407  376647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0818 19:34:28.317548  376647 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0818 19:34:28.317635  376647 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 19:34:28.348301  376647 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0818 19:34:28.348332  376647 start.go:495] detecting cgroup driver to use...
	I0818 19:34:28.348366  376647 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0818 19:34:28.348415  376647 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0818 19:34:28.361019  376647 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 19:34:28.372914  376647 docker.go:217] disabling cri-docker service (if available) ...
	I0818 19:34:28.373001  376647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0818 19:34:28.387525  376647 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0818 19:34:28.403850  376647 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0818 19:34:28.488775  376647 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0818 19:34:28.608468  376647 docker.go:233] disabling docker service ...
	I0818 19:34:28.608601  376647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0818 19:34:28.659711  376647 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0818 19:34:28.674416  376647 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	3b6d2b1b4ae9a       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   b1b3f127eaec2       dashboard-metrics-scraper-8d5bb5db8-shnnv
	35e10341a2644       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   00a3fa24cc162       storage-provisioner
	aea462f9a5614       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   d0c5c58c15196       kubernetes-dashboard-cd95d586-fmtrv
	9cb4a0a17af82       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   58ccf093e9043       coredns-74ff55c5b-vvtgz
	d9c85aaaa043f       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   bbc99c8e00507       kube-proxy-n9glb
	4929b781535fb       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   1e1850220e011       busybox
	48b1e06eab895       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   00a3fa24cc162       storage-provisioner
	136b3774b3686       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   8480a3a5d659e       kindnet-mbxls
	a70df47e0a4c9       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   e4f2705cb2add       kube-apiserver-old-k8s-version-216078
	03fbfc746eafc       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   b0bb457b89f22       etcd-old-k8s-version-216078
	6b77034b70c13       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   44bd951e4089d       kube-controller-manager-old-k8s-version-216078
	3f55d380c20ce       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   a2fbca7deeaf2       kube-scheduler-old-k8s-version-216078
	d1bac58a8da9c       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   200388e8854ea       busybox
	f7dac504cb498       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   2c9efb1c45f69       coredns-74ff55c5b-vvtgz
	80ee1e729513b       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   f8ffb39a715af       kindnet-mbxls
	905b265420907       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   c3d046b4a4c01       kube-proxy-n9glb
	c1f327db25aff       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   40dc09dd25a8c       kube-scheduler-old-k8s-version-216078
	dac35e1e30e3e       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   8b77af9a5bd6b       etcd-old-k8s-version-216078
	00fecac5b2749       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   74c3576c02b95       kube-controller-manager-old-k8s-version-216078
	a80fdd47f7301       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   bb844606ba7a1       kube-apiserver-old-k8s-version-216078
	
	
	==> containerd <==
	Aug 18 19:30:50 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:30:50.760257444Z" level=info msg="CreateContainer within sandbox \"b1b3f127eaec2053dc974a2d56b2af6fd92c0d8b3e7c46097bc42787f8ca8682\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"f04e6e02d7fa8356180971c3afc1d6efb4a91bb0892d59fbd091ae730e74fba7\""
	Aug 18 19:30:50 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:30:50.760854455Z" level=info msg="StartContainer for \"f04e6e02d7fa8356180971c3afc1d6efb4a91bb0892d59fbd091ae730e74fba7\""
	Aug 18 19:30:50 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:30:50.836043210Z" level=info msg="StartContainer for \"f04e6e02d7fa8356180971c3afc1d6efb4a91bb0892d59fbd091ae730e74fba7\" returns successfully"
	Aug 18 19:30:50 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:30:50.864684576Z" level=info msg="shim disconnected" id=f04e6e02d7fa8356180971c3afc1d6efb4a91bb0892d59fbd091ae730e74fba7 namespace=k8s.io
	Aug 18 19:30:50 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:30:50.864746196Z" level=warning msg="cleaning up after shim disconnected" id=f04e6e02d7fa8356180971c3afc1d6efb4a91bb0892d59fbd091ae730e74fba7 namespace=k8s.io
	Aug 18 19:30:50 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:30:50.864757347Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 18 19:30:51 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:30:51.643105652Z" level=info msg="RemoveContainer for \"9ef12fb84dcc5f6a6eb29583f4ae47f23d3e3a9e2ebe8af984f2f5f3e4635569\""
	Aug 18 19:30:51 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:30:51.662897239Z" level=info msg="RemoveContainer for \"9ef12fb84dcc5f6a6eb29583f4ae47f23d3e3a9e2ebe8af984f2f5f3e4635569\" returns successfully"
	Aug 18 19:31:40 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:31:40.741042623Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:31:40 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:31:40.745645658Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 18 19:31:40 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:31:40.747056934Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 18 19:31:40 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:31:40.747088983Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.742602167Z" level=info msg="CreateContainer within sandbox \"b1b3f127eaec2053dc974a2d56b2af6fd92c0d8b3e7c46097bc42787f8ca8682\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.760417418Z" level=info msg="CreateContainer within sandbox \"b1b3f127eaec2053dc974a2d56b2af6fd92c0d8b3e7c46097bc42787f8ca8682\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2\""
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.760931574Z" level=info msg="StartContainer for \"3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2\""
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.826475058Z" level=info msg="StartContainer for \"3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2\" returns successfully"
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.852140206Z" level=info msg="shim disconnected" id=3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2 namespace=k8s.io
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.852201900Z" level=warning msg="cleaning up after shim disconnected" id=3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2 namespace=k8s.io
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.852214117Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.878033398Z" level=info msg="RemoveContainer for \"f04e6e02d7fa8356180971c3afc1d6efb4a91bb0892d59fbd091ae730e74fba7\""
	Aug 18 19:32:17 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:32:17.899234777Z" level=info msg="RemoveContainer for \"f04e6e02d7fa8356180971c3afc1d6efb4a91bb0892d59fbd091ae730e74fba7\" returns successfully"
	Aug 18 19:34:26 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:34:26.744364488Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:34:26 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:34:26.752916291Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Aug 18 19:34:26 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:34:26.756643251Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Aug 18 19:34:26 old-k8s-version-216078 containerd[569]: time="2024-08-18T19:34:26.757593122Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [9cb4a0a17af824f66e9d214beb9b26be48e63a6e4f9896caf75d310dbc8cd195] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:42596 - 61776 "HINFO IN 1862080754105022271.5888583688692755596. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011439832s
	
	
	==> coredns [f7dac504cb498237ca5f7b6a524527cefe7917920a3d413a7c4cd970c819a894] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:58444 - 44724 "HINFO IN 212848429246740913.5217799900105664813. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.019464638s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-216078
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-216078
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=old-k8s-version-216078
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T19_26_08_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 19:26:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-216078
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 19:34:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 18 Aug 2024 19:29:39 +0000   Sun, 18 Aug 2024 19:25:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 18 Aug 2024 19:29:39 +0000   Sun, 18 Aug 2024 19:25:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 18 Aug 2024 19:29:39 +0000   Sun, 18 Aug 2024 19:25:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 18 Aug 2024 19:29:39 +0000   Sun, 18 Aug 2024 19:26:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-216078
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99166311889b4661b085793883fca85b
	  System UUID:                12b18352-3445-4fa7-8e5f-c567cc3538f4
	  Boot ID:                    46f0c01d-aaa8-472d-87c6-dade3bb189f7
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 coredns-74ff55c5b-vvtgz                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m7s
	  kube-system                 etcd-old-k8s-version-216078                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m14s
	  kube-system                 kindnet-mbxls                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m7s
	  kube-system                 kube-apiserver-old-k8s-version-216078             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-old-k8s-version-216078    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-proxy-n9glb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-scheduler-old-k8s-version-216078             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 metrics-server-9975d5f86-5k9lw                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m27s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m6s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-shnnv         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-fmtrv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m34s (x4 over 8m34s)  kubelet     Node old-k8s-version-216078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s (x3 over 8m34s)  kubelet     Node old-k8s-version-216078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s (x4 over 8m34s)  kubelet     Node old-k8s-version-216078 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m15s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m15s                  kubelet     Node old-k8s-version-216078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m15s                  kubelet     Node old-k8s-version-216078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m15s                  kubelet     Node old-k8s-version-216078 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m15s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m7s                   kubelet     Node old-k8s-version-216078 status is now: NodeReady
	  Normal  Starting                 8m6s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m59s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-216078 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m59s (x7 over 5m59s)  kubelet     Node old-k8s-version-216078 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x8 over 5m59s)  kubelet     Node old-k8s-version-216078 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m41s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [03fbfc746eafcd2d037579ee9352ca8a230e2d686179a89c453eeae19ba4b178] <==
	2024-08-18 19:30:21.639240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:30:31.639129 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:30:41.639126 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:30:51.639672 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:31:01.639426 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:31:11.639006 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:31:21.639298 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:31:31.641291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:31:41.639129 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:31:51.639102 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:32:01.639124 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:32:11.639099 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:32:21.639216 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:32:31.639212 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:32:41.639979 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:32:51.639108 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:33:01.639071 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:33:11.639063 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:33:21.639153 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:33:31.639168 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:33:41.639035 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:33:51.639001 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:34:01.639096 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:34:11.639098 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:34:21.639068 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [dac35e1e30e3e5a45ee7083d88bb89388a2e328ba629d84ddaa0317c718721e1] <==
	raft2024/08/18 19:25:58 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/08/18 19:25:58 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/08/18 19:25:58 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/08/18 19:25:58 INFO: ea7e25599daad906 became leader at term 2
	raft2024/08/18 19:25:58 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-08-18 19:25:58.223871 I | etcdserver: setting up the initial cluster version to 3.4
	2024-08-18 19:25:58.227576 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-08-18 19:25:58.227804 I | etcdserver/api: enabled capabilities for version 3.4
	2024-08-18 19:25:58.227922 I | etcdserver: published {Name:old-k8s-version-216078 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-08-18 19:25:58.228147 I | embed: ready to serve client requests
	2024-08-18 19:25:58.229908 I | embed: serving client requests on 192.168.76.2:2379
	2024-08-18 19:25:58.234024 I | embed: ready to serve client requests
	2024-08-18 19:25:58.235613 I | embed: serving client requests on 127.0.0.1:2379
	2024-08-18 19:26:18.397418 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:26:22.081963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:26:32.082030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:26:42.082250 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:26:52.082038 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:27:02.082094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:27:12.082017 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:27:22.082012 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:27:32.082043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:27:42.082291 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:27:52.082223 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-08-18 19:28:02.085345 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 19:34:30 up 1 day,  4:16,  0 users,  load average: 1.31, 1.92, 2.43
	Linux old-k8s-version-216078 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [136b3774b3686f0bcd42b41a696212f7023654f7c9a799d192302697dae329d1] <==
	I0818 19:33:09.533638       1 main.go:299] handling current node
	I0818 19:33:19.533952       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:33:19.533989       1 main.go:299] handling current node
	I0818 19:33:29.534171       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:33:29.534206       1 main.go:299] handling current node
	W0818 19:33:30.394047       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:33:30.394083       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	W0818 19:33:38.194787       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:33:38.194822       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0818 19:33:39.534239       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:33:39.534275       1 main.go:299] handling current node
	I0818 19:33:49.535285       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:33:49.535552       1 main.go:299] handling current node
	I0818 19:33:59.535045       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:33:59.535104       1 main.go:299] handling current node
	W0818 19:34:05.677260       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0818 19:34:05.677302       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0818 19:34:09.534236       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:34:09.534283       1 main.go:299] handling current node
	W0818 19:34:11.006580       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:34:11.006649       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0818 19:34:19.534387       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:34:19.534605       1 main.go:299] handling current node
	I0818 19:34:29.533612       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:34:29.533648       1 main.go:299] handling current node
	
	
	==> kindnet [80ee1e729513b71685fa053c5499a375152679a2ba2012d6cc5008ba2faab62c] <==
	I0818 19:26:56.552101       1 main.go:299] handling current node
	W0818 19:27:02.324712       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0818 19:27:02.324752       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0818 19:27:04.728615       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:27:04.728657       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0818 19:27:06.551947       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:27:06.552050       1 main.go:299] handling current node
	W0818 19:27:07.602526       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:27:07.602637       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0818 19:27:16.552197       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:27:16.552269       1 main.go:299] handling current node
	I0818 19:27:26.552582       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:27:26.552618       1 main.go:299] handling current node
	I0818 19:27:36.552458       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:27:36.552505       1 main.go:299] handling current node
	W0818 19:27:37.187720       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0818 19:27:37.187753       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0818 19:27:46.551560       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:27:46.551594       1 main.go:299] handling current node
	W0818 19:27:46.810582       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:27:46.810617       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0818 19:27:47.886744       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0818 19:27:47.886781       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0818 19:27:56.552331       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0818 19:27:56.552365       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a70df47e0a4c904f4234489c4b6931a0c40e23791ebbe701bdf79b173c708054] <==
	I0818 19:30:49.733499       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:30:49.733678       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0818 19:31:32.484364       1 client.go:360] parsed scheme: "passthrough"
	I0818 19:31:32.484418       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:31:32.484428       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0818 19:31:49.734013       1 handler_proxy.go:102] no RequestInfo found in the context
	E0818 19:31:49.734090       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0818 19:31:49.734106       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 19:32:10.698817       1 client.go:360] parsed scheme: "passthrough"
	I0818 19:32:10.698863       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:32:10.699015       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0818 19:32:49.380346       1 client.go:360] parsed scheme: "passthrough"
	I0818 19:32:49.380399       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:32:49.380435       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0818 19:33:25.741758       1 client.go:360] parsed scheme: "passthrough"
	I0818 19:33:25.741973       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:33:25.741993       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0818 19:33:46.920931       1 handler_proxy.go:102] no RequestInfo found in the context
	E0818 19:33:46.921065       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0818 19:33:46.921108       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0818 19:34:08.227932       1 client.go:360] parsed scheme: "passthrough"
	I0818 19:34:08.227976       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:34:08.227987       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [a80fdd47f7301370c07df42778e346c5b3f9a5082205cddd92243b2292907316] <==
	I0818 19:26:05.090538       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0818 19:26:05.090576       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0818 19:26:05.098628       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0818 19:26:05.104209       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0818 19:26:05.104235       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0818 19:26:05.631417       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0818 19:26:05.679970       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0818 19:26:05.737963       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0818 19:26:05.739122       1 controller.go:606] quota admission added evaluator for: endpoints
	I0818 19:26:05.743247       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0818 19:26:06.787005       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0818 19:26:07.247890       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0818 19:26:07.335153       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0818 19:26:15.673948       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 19:26:23.024745       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0818 19:26:23.118755       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0818 19:26:39.071265       1 client.go:360] parsed scheme: "passthrough"
	I0818 19:26:39.071512       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:26:39.071531       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0818 19:27:15.683422       1 client.go:360] parsed scheme: "passthrough"
	I0818 19:27:15.683464       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:27:15.683473       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0818 19:28:00.059887       1 client.go:360] parsed scheme: "passthrough"
	I0818 19:28:00.059942       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0818 19:28:00.059952       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [00fecac5b274921d65e34f858a2d0fb18402e84618ee482b50df18739b0495ad] <==
	I0818 19:26:23.176757       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0818 19:26:23.176858       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-216078. Assuming now as a timestamp.
	I0818 19:26:23.176949       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0818 19:26:23.177122       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0818 19:26:23.178333       1 event.go:291] "Event occurred" object="old-k8s-version-216078" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-216078 event: Registered Node old-k8s-version-216078 in Controller"
	I0818 19:26:23.204014       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-d6z7k"
	I0818 19:26:23.224578       1 shared_informer.go:247] Caches are synced for attach detach 
	I0818 19:26:23.246256       1 shared_informer.go:247] Caches are synced for namespace 
	I0818 19:26:23.246382       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-n9glb"
	I0818 19:26:23.250975       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-216078" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0818 19:26:23.251059       1 shared_informer.go:247] Caches are synced for resource quota 
	I0818 19:26:23.267412       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-vvtgz"
	I0818 19:26:23.278784       1 shared_informer.go:247] Caches are synced for disruption 
	I0818 19:26:23.278812       1 disruption.go:339] Sending events to api server.
	I0818 19:26:23.281545       1 shared_informer.go:247] Caches are synced for resource quota 
	I0818 19:26:23.301752       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mbxls"
	I0818 19:26:23.432677       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0818 19:26:23.681602       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0818 19:26:23.681623       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0818 19:26:23.735025       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0818 19:26:24.868252       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0818 19:26:24.938738       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-d6z7k"
	I0818 19:26:28.177200       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0818 19:28:02.375497       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0818 19:28:02.543003       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [6b77034b70c13c794e7e71994e90063cf36dd1fa6cbe8d1784b7ceee707c9aca] <==
	E0818 19:30:05.767168       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0818 19:30:11.470636       1 request.go:655] Throttling request took 1.048279983s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0818 19:30:12.322112       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0818 19:30:36.269003       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0818 19:30:43.972498       1 request.go:655] Throttling request took 1.048099269s, request: GET:https://192.168.76.2:8443/apis/node.k8s.io/v1beta1?timeout=32s
	W0818 19:30:44.824070       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0818 19:31:06.770800       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0818 19:31:16.474537       1 request.go:655] Throttling request took 1.04839404s, request: GET:https://192.168.76.2:8443/apis/batch/v1beta1?timeout=32s
	W0818 19:31:17.325961       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0818 19:31:37.273086       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0818 19:31:48.976554       1 request.go:655] Throttling request took 1.048436151s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0818 19:31:49.827893       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0818 19:32:07.800593       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0818 19:32:21.478314       1 request.go:655] Throttling request took 1.047704629s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0818 19:32:22.329772       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0818 19:32:38.302529       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0818 19:32:53.980296       1 request.go:655] Throttling request took 1.048489662s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0818 19:32:54.831747       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0818 19:33:08.804274       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0818 19:33:26.482102       1 request.go:655] Throttling request took 1.048266535s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v2beta2?timeout=32s
	W0818 19:33:27.333594       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0818 19:33:39.306054       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0818 19:33:58.984078       1 request.go:655] Throttling request took 1.048535065s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
	W0818 19:33:59.835462       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0818 19:34:09.807995       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [905b265420907d5b4b8ee43b00a698a67557ffce65488b60828117b20fde26b1] <==
	I0818 19:26:24.250347       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0818 19:26:24.250436       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0818 19:26:24.315950       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0818 19:26:24.316055       1 server_others.go:185] Using iptables Proxier.
	I0818 19:26:24.316265       1 server.go:650] Version: v1.20.0
	I0818 19:26:24.328423       1 config.go:315] Starting service config controller
	I0818 19:26:24.328446       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0818 19:26:24.328472       1 config.go:224] Starting endpoint slice config controller
	I0818 19:26:24.328484       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0818 19:26:24.428571       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0818 19:26:24.428644       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [d9c85aaaa043f149f3f6daad7c932a643b6619f4c4cb7755b152310052480f6f] <==
	I0818 19:28:49.407693       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0818 19:28:49.407764       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0818 19:28:49.469882       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0818 19:28:49.470181       1 server_others.go:185] Using iptables Proxier.
	I0818 19:28:49.470547       1 server.go:650] Version: v1.20.0
	I0818 19:28:49.471442       1 config.go:315] Starting service config controller
	I0818 19:28:49.471513       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0818 19:28:49.471556       1 config.go:224] Starting endpoint slice config controller
	I0818 19:28:49.471595       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0818 19:28:49.571653       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0818 19:28:49.571735       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [3f55d380c20ce8abf819eb90f1107f99b6883a3af7d8b0c875d12639b6f5a1a4] <==
	I0818 19:28:37.907562       1 serving.go:331] Generated self-signed cert in-memory
	W0818 19:28:45.754863       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 19:28:45.754904       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 19:28:45.754918       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:28:45.754924       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:28:46.044587       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0818 19:28:46.063429       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:28:46.063447       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:28:46.063476       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0818 19:28:46.263595       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [c1f327db25afffcb7632d994277e73b15d44e051118ecff71b21299a38bcbe10] <==
	I0818 19:25:59.239399       1 serving.go:331] Generated self-signed cert in-memory
	W0818 19:26:04.242176       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 19:26:04.242222       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 19:26:04.242232       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 19:26:04.242237       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 19:26:04.354895       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0818 19:26:04.354981       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:26:04.354988       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 19:26:04.355009       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0818 19:26:04.384212       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0818 19:26:04.384454       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0818 19:26:04.384573       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0818 19:26:04.384751       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0818 19:26:04.384897       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0818 19:26:04.385063       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0818 19:26:04.385184       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0818 19:26:04.385296       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0818 19:26:04.385378       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0818 19:26:04.385486       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0818 19:26:04.385581       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0818 19:26:04.398189       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0818 19:26:05.312440       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0818 19:26:06.055187       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Aug 18 19:32:58 old-k8s-version-216078 kubelet[659]: E0818 19:32:58.740474     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:33:10 old-k8s-version-216078 kubelet[659]: I0818 19:33:10.739733     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2
	Aug 18 19:33:10 old-k8s-version-216078 kubelet[659]: E0818 19:33:10.740134     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	Aug 18 19:33:12 old-k8s-version-216078 kubelet[659]: E0818 19:33:12.740523     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:33:21 old-k8s-version-216078 kubelet[659]: I0818 19:33:21.743845     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2
	Aug 18 19:33:21 old-k8s-version-216078 kubelet[659]: E0818 19:33:21.744175     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	Aug 18 19:33:26 old-k8s-version-216078 kubelet[659]: E0818 19:33:26.742186     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:33:33 old-k8s-version-216078 kubelet[659]: I0818 19:33:33.744484     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2
	Aug 18 19:33:33 old-k8s-version-216078 kubelet[659]: E0818 19:33:33.745221     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	Aug 18 19:33:37 old-k8s-version-216078 kubelet[659]: E0818 19:33:37.740667     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:33:46 old-k8s-version-216078 kubelet[659]: I0818 19:33:46.739742     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2
	Aug 18 19:33:46 old-k8s-version-216078 kubelet[659]: E0818 19:33:46.740184     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	Aug 18 19:33:48 old-k8s-version-216078 kubelet[659]: E0818 19:33:48.741619     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:33:59 old-k8s-version-216078 kubelet[659]: I0818 19:33:59.739726     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2
	Aug 18 19:33:59 old-k8s-version-216078 kubelet[659]: E0818 19:33:59.740109     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	Aug 18 19:34:02 old-k8s-version-216078 kubelet[659]: E0818 19:34:02.740456     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:34:10 old-k8s-version-216078 kubelet[659]: I0818 19:34:10.739780     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2
	Aug 18 19:34:10 old-k8s-version-216078 kubelet[659]: E0818 19:34:10.740182     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	Aug 18 19:34:15 old-k8s-version-216078 kubelet[659]: E0818 19:34:15.741175     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Aug 18 19:34:25 old-k8s-version-216078 kubelet[659]: I0818 19:34:25.740319     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 3b6d2b1b4ae9a754515adb79d3fa5549aad0d5f9d5b9082c3a443b893020e7d2
	Aug 18 19:34:25 old-k8s-version-216078 kubelet[659]: E0818 19:34:25.740655     659 pod_workers.go:191] Error syncing pod 8c3c11f9-c5cc-46fd-af21-f01b7f474e46 ("dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-shnnv_kubernetes-dashboard(8c3c11f9-c5cc-46fd-af21-f01b7f474e46)"
	Aug 18 19:34:26 old-k8s-version-216078 kubelet[659]: E0818 19:34:26.758120     659 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 18 19:34:26 old-k8s-version-216078 kubelet[659]: E0818 19:34:26.758716     659 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 18 19:34:26 old-k8s-version-216078 kubelet[659]: E0818 19:34:26.759143     659 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-lkhr7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-5k9lw_kube-system(40ad329
9-8676-4f1f-a314-c139b58a3e50): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Aug 18 19:34:26 old-k8s-version-216078 kubelet[659]: E0818 19:34:26.759399     659 pod_workers.go:191] Error syncing pod 40ad3299-8676-4f1f-a314-c139b58a3e50 ("metrics-server-9975d5f86-5k9lw_kube-system(40ad3299-8676-4f1f-a314-c139b58a3e50)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [aea462f9a5614878af08a599d3ec3ce490ef7ff8d286df3bad62c54b80d55394] <==
	2024/08/18 19:29:13 Using namespace: kubernetes-dashboard
	2024/08/18 19:29:13 Using in-cluster config to connect to apiserver
	2024/08/18 19:29:13 Using secret token for csrf signing
	2024/08/18 19:29:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/08/18 19:29:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/08/18 19:29:13 Successful initial request to the apiserver, version: v1.20.0
	2024/08/18 19:29:13 Generating JWE encryption key
	2024/08/18 19:29:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/08/18 19:29:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/08/18 19:29:14 Initializing JWE encryption key from synchronized object
	2024/08/18 19:29:14 Creating in-cluster Sidecar client
	2024/08/18 19:29:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:29:14 Serving insecurely on HTTP port: 9090
	2024/08/18 19:29:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:30:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:30:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:31:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:31:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:32:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:32:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:33:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:33:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:34:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/08/18 19:29:13 Starting overwatch
	
	
	==> storage-provisioner [35e10341a264471c2a4ba4fc06d275735b1230d24e7fa77bea16c30257e91109] <==
	I0818 19:29:31.880969       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0818 19:29:31.902654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0818 19:29:31.902701       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0818 19:29:49.380506       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0818 19:29:49.383515       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-216078_2a36c6e8-8439-4cae-a5d7-b900f95315e1!
	I0818 19:29:49.388312       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e392ce86-1657-4460-a535-f84a7aaeafb9", APIVersion:"v1", ResourceVersion:"839", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-216078_2a36c6e8-8439-4cae-a5d7-b900f95315e1 became leader
	I0818 19:29:49.486344       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-216078_2a36c6e8-8439-4cae-a5d7-b900f95315e1!
	
	
	==> storage-provisioner [48b1e06eab89513ac1b65a73dd1d7880d6028031dc94ee400b4ceb77169245aa] <==
	I0818 19:28:49.007807       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0818 19:29:19.014481       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-216078 -n old-k8s-version-216078
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-216078 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-5k9lw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-216078 describe pod metrics-server-9975d5f86-5k9lw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-216078 describe pod metrics-server-9975d5f86-5k9lw: exit status 1 (105.970258ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-5k9lw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-216078 describe pod metrics-server-9975d5f86-5k9lw: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (376.68s)
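Note: the only non-running pod the post-mortem finds is metrics-server-9975d5f86-5k9lw, and the kubelet log above shows why it never starts: its image reference points at fake.domain/registry.k8s.io/echoserver:1.4, which the node cannot resolve ("lookup fake.domain ... no such host"), so the pull loops through ErrImagePull/ImagePullBackOff; the dashboard-metrics-scraper pod is cycling through CrashLoopBackOff at the same time. Below is a minimal client-go sketch of the same "list non-running pods" check the post-mortem performs with kubectl (illustrative code only, not the helpers_test.go implementation; the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the CI run keeps its kubeconfig under the Jenkins workspace.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same selector as the post-mortem step: pods whose phase is not Running, across all namespaces.
		pods, err := cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}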

                                                
                                    

Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.04
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 5.19
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.07
18 TestDownloadOnly/v1.31.0/DeleteAll 0.2
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 215.54
31 TestAddons/serial/GCPAuth/Namespaces 0.2
33 TestAddons/parallel/Registry 14.2
34 TestAddons/parallel/Ingress 19.5
35 TestAddons/parallel/InspektorGadget 11.96
36 TestAddons/parallel/MetricsServer 6.88
39 TestAddons/parallel/CSI 35.77
40 TestAddons/parallel/Headlamp 17.74
41 TestAddons/parallel/CloudSpanner 6.57
42 TestAddons/parallel/LocalPath 51.78
43 TestAddons/parallel/NvidiaDevicePlugin 6.67
44 TestAddons/parallel/Yakd 12.03
45 TestAddons/StoppedEnableDisable 12.34
46 TestCertOptions 38.82
47 TestCertExpiration 230.02
49 TestForceSystemdFlag 37.69
50 TestForceSystemdEnv 39.47
51 TestDockerEnvContainerd 43.95
56 TestErrorSpam/setup 33.11
57 TestErrorSpam/start 0.76
58 TestErrorSpam/status 1.09
59 TestErrorSpam/pause 1.79
60 TestErrorSpam/unpause 1.85
61 TestErrorSpam/stop 1.47
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.67
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 5.99
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.11
73 TestFunctional/serial/CacheCmd/cache/add_local 1.56
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.04
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 47.24
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.71
84 TestFunctional/serial/LogsFileCmd 1.69
85 TestFunctional/serial/InvalidService 4.49
87 TestFunctional/parallel/ConfigCmd 0.43
88 TestFunctional/parallel/DashboardCmd 9.16
89 TestFunctional/parallel/DryRun 0.5
90 TestFunctional/parallel/InternationalLanguage 0.19
91 TestFunctional/parallel/StatusCmd 1.22
95 TestFunctional/parallel/ServiceCmdConnect 11.69
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 25.28
99 TestFunctional/parallel/SSHCmd 0.76
100 TestFunctional/parallel/CpCmd 2.16
102 TestFunctional/parallel/FileSync 0.35
103 TestFunctional/parallel/CertSync 2.18
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.72
111 TestFunctional/parallel/License 0.48
112 TestFunctional/parallel/Version/short 0.11
113 TestFunctional/parallel/Version/components 1.26
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.99
119 TestFunctional/parallel/ImageCommands/Setup 0.74
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.31
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.72
127 TestFunctional/parallel/ProfileCmd/profile_list 0.48
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
131 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.63
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.75
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.43
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/DeployApp 6.26
145 TestFunctional/parallel/ServiceCmd/List 0.53
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.65
147 TestFunctional/parallel/MountCmd/any-port 7.65
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.47
149 TestFunctional/parallel/ServiceCmd/Format 0.37
150 TestFunctional/parallel/ServiceCmd/URL 0.54
151 TestFunctional/parallel/MountCmd/specific-port 2.56
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.37
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 120.18
160 TestMultiControlPlane/serial/DeployApp 31.18
161 TestMultiControlPlane/serial/PingHostFromPods 1.7
162 TestMultiControlPlane/serial/AddWorkerNode 23.85
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
165 TestMultiControlPlane/serial/CopyFile 19.2
166 TestMultiControlPlane/serial/StopSecondaryNode 12.89
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.23
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.83
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 140.84
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.54
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.56
173 TestMultiControlPlane/serial/StopCluster 36.02
174 TestMultiControlPlane/serial/RestartCluster 68.1
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.56
176 TestMultiControlPlane/serial/AddSecondaryNode 50.35
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.74
181 TestJSONOutput/start/Command 56.04
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.73
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.67
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.77
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 40.72
207 TestKicCustomNetwork/use_default_bridge_network 32.64
208 TestKicExistingNetwork 32.29
209 TestKicCustomSubnet 36.08
210 TestKicStaticIP 38.98
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 65.98
215 TestMountStart/serial/StartWithMountFirst 6.35
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.15
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.61
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.19
222 TestMountStart/serial/RestartStopped 7.55
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 83.09
227 TestMultiNode/serial/DeployApp2Nodes 15.42
228 TestMultiNode/serial/PingHostFrom2Pods 1.02
229 TestMultiNode/serial/AddNode 18.72
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.35
232 TestMultiNode/serial/CopyFile 10.16
233 TestMultiNode/serial/StopNode 2.25
234 TestMultiNode/serial/StartAfterStop 9.58
235 TestMultiNode/serial/RestartKeepsNodes 91.43
236 TestMultiNode/serial/DeleteNode 5.66
237 TestMultiNode/serial/StopMultiNode 24.23
238 TestMultiNode/serial/RestartMultiNode 50.95
239 TestMultiNode/serial/ValidateNameConflict 34.71
244 TestPreload 116.72
246 TestScheduledStopUnix 106.01
249 TestInsufficientStorage 10.11
250 TestRunningBinaryUpgrade 90.2
252 TestKubernetesUpgrade 347.79
253 TestMissingContainerUpgrade 170.43
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
256 TestNoKubernetes/serial/StartWithK8s 39.45
257 TestNoKubernetes/serial/StartWithStopK8s 18.05
258 TestNoKubernetes/serial/Start 5.84
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
260 TestNoKubernetes/serial/ProfileList 1.09
261 TestNoKubernetes/serial/Stop 1.26
262 TestNoKubernetes/serial/StartNoArgs 6.87
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
264 TestStoppedBinaryUpgrade/Setup 0.69
265 TestStoppedBinaryUpgrade/Upgrade 108.95
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
275 TestPause/serial/Start 65.79
276 TestPause/serial/SecondStartNoReconfiguration 7.46
277 TestPause/serial/Pause 1.11
278 TestPause/serial/VerifyStatus 0.49
279 TestPause/serial/Unpause 0.98
280 TestPause/serial/PauseAgain 1.19
281 TestPause/serial/DeletePaused 2.99
282 TestPause/serial/VerifyDeletedResources 0.47
290 TestNetworkPlugins/group/false 4.86
295 TestStartStop/group/old-k8s-version/serial/FirstStart 151.78
296 TestStartStop/group/old-k8s-version/serial/DeployApp 7.78
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.4
298 TestStartStop/group/old-k8s-version/serial/Stop 13.12
300 TestStartStop/group/no-preload/serial/FirstStart 68.28
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
303 TestStartStop/group/no-preload/serial/DeployApp 8.45
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.13
305 TestStartStop/group/no-preload/serial/Stop 12.11
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 266.72
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
311 TestStartStop/group/no-preload/serial/Pause 3.26
313 TestStartStop/group/embed-certs/serial/FirstStart 55.31
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.17
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.39
317 TestStartStop/group/old-k8s-version/serial/Pause 4.35
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.78
320 TestStartStop/group/embed-certs/serial/DeployApp 9.47
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.56
322 TestStartStop/group/embed-certs/serial/Stop 12.32
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/embed-certs/serial/SecondStart 302.13
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.5
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.46
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.48
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.32
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
334 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
335 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.59
336 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.31
337 TestStartStop/group/embed-certs/serial/Pause 4.44
339 TestStartStop/group/newest-cni/serial/FirstStart 43.1
340 TestNetworkPlugins/group/auto/Start 72.18
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.76
343 TestStartStop/group/newest-cni/serial/Stop 1.39
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.31
345 TestStartStop/group/newest-cni/serial/SecondStart 16.17
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.38
349 TestStartStop/group/newest-cni/serial/Pause 3.08
350 TestNetworkPlugins/group/kindnet/Start 58.09
351 TestNetworkPlugins/group/auto/KubeletFlags 0.6
352 TestNetworkPlugins/group/auto/NetCatPod 10.57
353 TestNetworkPlugins/group/auto/DNS 0.24
354 TestNetworkPlugins/group/auto/Localhost 0.21
355 TestNetworkPlugins/group/auto/HairPin 0.19
356 TestNetworkPlugins/group/calico/Start 68.41
357 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
358 TestNetworkPlugins/group/kindnet/KubeletFlags 0.41
359 TestNetworkPlugins/group/kindnet/NetCatPod 11.45
360 TestNetworkPlugins/group/kindnet/DNS 0.19
361 TestNetworkPlugins/group/kindnet/Localhost 0.2
362 TestNetworkPlugins/group/kindnet/HairPin 0.17
363 TestNetworkPlugins/group/custom-flannel/Start 52.93
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.36
366 TestNetworkPlugins/group/calico/NetCatPod 10.33
367 TestNetworkPlugins/group/calico/DNS 0.29
368 TestNetworkPlugins/group/calico/Localhost 0.24
369 TestNetworkPlugins/group/calico/HairPin 0.15
370 TestNetworkPlugins/group/enable-default-cni/Start 46.47
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
373 TestNetworkPlugins/group/custom-flannel/DNS 0.25
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
376 TestNetworkPlugins/group/flannel/Start 55.15
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.27
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
382 TestNetworkPlugins/group/bridge/Start 46.84
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
385 TestNetworkPlugins/group/flannel/NetCatPod 12.37
386 TestNetworkPlugins/group/flannel/DNS 0.21
387 TestNetworkPlugins/group/flannel/Localhost 0.15
388 TestNetworkPlugins/group/flannel/HairPin 0.23
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
390 TestNetworkPlugins/group/bridge/NetCatPod 10.38
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (7.04s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-591709 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-591709 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.037540005s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.04s)
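The json-events check drives minikube start with -o=json and --download-only and watches the JSON event stream printed to stdout. A minimal sketch of consuming such a stream (illustrative only; the "type" and "data" field names are an assumption about the emitted records, not a documented schema):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Assumed pipe usage: out/minikube-linux-arm64 start -o=json --download-only ... | this-program
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip anything that is not a JSON object
			}
			fmt.Printf("event type=%v data=%v\n", ev["type"], ev["data"])
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}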

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-591709
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-591709: exit status 85 (69.116104ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-591709 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |          |
	|         | -p download-only-591709        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:38:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:38:04.916608  159554 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:38:04.916792  159554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:04.916823  159554 out.go:358] Setting ErrFile to fd 2...
	I0818 18:38:04.916845  159554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:04.917098  159554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	W0818 18:38:04.917262  159554 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19423-154159/.minikube/config/config.json: open /home/jenkins/minikube-integration/19423-154159/.minikube/config/config.json: no such file or directory
	I0818 18:38:04.917710  159554 out.go:352] Setting JSON to true
	I0818 18:38:04.918625  159554 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":98429,"bootTime":1723907856,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0818 18:38:04.918732  159554 start.go:139] virtualization:  
	I0818 18:38:04.921665  159554 out.go:97] [download-only-591709] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0818 18:38:04.921839  159554 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball: no such file or directory
	I0818 18:38:04.921878  159554 notify.go:220] Checking for updates...
	I0818 18:38:04.923256  159554 out.go:169] MINIKUBE_LOCATION=19423
	I0818 18:38:04.925168  159554 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:38:04.927024  159554 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 18:38:04.928975  159554 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	I0818 18:38:04.930611  159554 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0818 18:38:04.933896  159554 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 18:38:04.934224  159554 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:38:04.961755  159554 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0818 18:38:04.961863  159554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:38:05.036806  159554 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-18 18:38:05.024249913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:38:05.036919  159554 docker.go:307] overlay module found
	I0818 18:38:05.039219  159554 out.go:97] Using the docker driver based on user configuration
	I0818 18:38:05.039275  159554 start.go:297] selected driver: docker
	I0818 18:38:05.039283  159554 start.go:901] validating driver "docker" against <nil>
	I0818 18:38:05.039402  159554 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:38:05.104433  159554 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-18 18:38:05.094089126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:38:05.104601  159554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:38:05.104898  159554 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0818 18:38:05.105060  159554 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 18:38:05.106974  159554 out.go:169] Using Docker driver with root privileges
	I0818 18:38:05.109091  159554 cni.go:84] Creating CNI manager for ""
	I0818 18:38:05.109112  159554 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0818 18:38:05.109132  159554 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0818 18:38:05.109220  159554 start.go:340] cluster config:
	{Name:download-only-591709 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-591709 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:38:05.110913  159554 out.go:97] Starting "download-only-591709" primary control-plane node in "download-only-591709" cluster
	I0818 18:38:05.110935  159554 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0818 18:38:05.112932  159554 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0818 18:38:05.112962  159554 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0818 18:38:05.113008  159554 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0818 18:38:05.129425  159554 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0818 18:38:05.130314  159554 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0818 18:38:05.130443  159554 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0818 18:38:05.167710  159554 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0818 18:38:05.167736  159554 cache.go:56] Caching tarball of preloaded images
	I0818 18:38:05.168583  159554 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0818 18:38:05.170611  159554 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0818 18:38:05.170635  159554 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0818 18:38:05.254720  159554 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0818 18:38:09.072022  159554 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	
	
	* The control-plane node download-only-591709 host does not exist
	  To start a cluster, run: "minikube start -p download-only-591709"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
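The Last Start log above fetches the v1.20.0 preload from a URL that carries an embedded md5 checksum (checksum=md5:7e3d48ccb9f143791669d02e14ce1643). A minimal sketch of verifying a downloaded copy against that value (illustrative only, not minikube's preload.go; the file path is a placeholder for wherever the tarball was saved):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	func main() {
		// Expected value taken from the ?checksum=md5:... query string in the download URL above.
		const expected = "7e3d48ccb9f143791669d02e14ce1643"
		// Placeholder path; the CI run stores the tarball under its .minikube/cache/preloaded-tarball directory.
		f, err := os.Open("preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			panic(err)
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != expected {
			fmt.Printf("checksum mismatch: got %s, want %s\n", got, expected)
			os.Exit(1)
		}
		fmt.Println("preload checksum OK")
	}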

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-591709
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (5.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-171941 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-171941 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.189178408s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-171941
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-171941: exit status 85 (66.805144ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-591709 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | -p download-only-591709        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| delete  | -p download-only-591709        | download-only-591709 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| start   | -o=json --download-only        | download-only-171941 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | -p download-only-171941        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:38:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:38:12.357902  159756 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:38:12.358025  159756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:12.358036  159756 out.go:358] Setting ErrFile to fd 2...
	I0818 18:38:12.358042  159756 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:12.358273  159756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 18:38:12.358671  159756 out.go:352] Setting JSON to true
	I0818 18:38:12.359525  159756 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":98437,"bootTime":1723907856,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0818 18:38:12.359594  159756 start.go:139] virtualization:  
	I0818 18:38:12.362117  159756 out.go:97] [download-only-171941] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0818 18:38:12.362361  159756 notify.go:220] Checking for updates...
	I0818 18:38:12.363891  159756 out.go:169] MINIKUBE_LOCATION=19423
	I0818 18:38:12.365659  159756 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:38:12.367882  159756 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 18:38:12.369741  159756 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	I0818 18:38:12.371534  159756 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0818 18:38:12.375036  159756 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 18:38:12.375348  159756 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:38:12.397097  159756 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0818 18:38:12.397198  159756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:38:12.470714  159756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-18 18:38:12.460604238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:38:12.470825  159756 docker.go:307] overlay module found
	I0818 18:38:12.472783  159756 out.go:97] Using the docker driver based on user configuration
	I0818 18:38:12.472811  159756 start.go:297] selected driver: docker
	I0818 18:38:12.472818  159756 start.go:901] validating driver "docker" against <nil>
	I0818 18:38:12.472934  159756 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:38:12.523825  159756 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-08-18 18:38:12.514628495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:38:12.524008  159756 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:38:12.524307  159756 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0818 18:38:12.524466  159756 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 18:38:12.526261  159756 out.go:169] Using Docker driver with root privileges
	I0818 18:38:12.528380  159756 cni.go:84] Creating CNI manager for ""
	I0818 18:38:12.528401  159756 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0818 18:38:12.528414  159756 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0818 18:38:12.528497  159756 start.go:340] cluster config:
	{Name:download-only-171941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-171941 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:38:12.530131  159756 out.go:97] Starting "download-only-171941" primary control-plane node in "download-only-171941" cluster
	I0818 18:38:12.530154  159756 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0818 18:38:12.531975  159756 out.go:97] Pulling base image v0.0.44-1723740748-19452 ...
	I0818 18:38:12.532004  159756 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0818 18:38:12.532170  159756 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local docker daemon
	I0818 18:38:12.548214  159756 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d to local cache
	I0818 18:38:12.548354  159756 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory
	I0818 18:38:12.548378  159756 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d in local cache directory, skipping pull
	I0818 18:38:12.548383  159756 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d exists in cache, skipping pull
	I0818 18:38:12.548394  159756 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d as a tarball
	I0818 18:38:12.595760  159756 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0818 18:38:12.595809  159756 cache.go:56] Caching tarball of preloaded images
	I0818 18:38:12.596617  159756 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0818 18:38:12.599065  159756 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0818 18:38:12.599094  159756 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0818 18:38:12.682302  159756 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0818 18:38:15.912442  159756 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0818 18:38:15.912553  159756 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19423-154159/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0818 18:38:16.775477  159756 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0818 18:38:16.775857  159756 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/download-only-171941/config.json ...
	I0818 18:38:16.775890  159756 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/download-only-171941/config.json: {Name:mkaa0debec58ee1f8c7d2120b1b11acf19767a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:16.776085  159756 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0818 18:38:16.776243  159756 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19423-154159/.minikube/cache/linux/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-171941 host does not exist
	  To start a cluster, run: "minikube start -p download-only-171941"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.07s)
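For reference, the preload fetch logged above can be reproduced by hand. This is a minimal sketch (not something the test runs) using curl and md5sum, with the URL and expected digest taken from the download.go line above:

    # download the preload tarball and verify it against the md5 embedded in the URL's ?checksum= parameter
    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4"
    md5sum preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
    # expect: ea65ad5fd42227e06b9323ff45647208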

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-171941
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-829476 --alsologtostderr --binary-mirror http://127.0.0.1:41021 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-829476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-829476
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-677874
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-677874: exit status 85 (71.656141ms)

                                                
                                                
-- stdout --
	* Profile "addons-677874" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-677874"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-677874
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-677874: exit status 85 (80.060195ms)

                                                
                                                
-- stdout --
	* Profile "addons-677874" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-677874"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (215.54s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-677874 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-677874 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m35.536407501s)
--- PASS: TestAddons/Setup (215.54s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-677874 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-677874 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 6.445615ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-xmzmk" [f27b6e94-6969-4d20-b949-6bda33e68f47] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004395857s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kdttv" [e0470018-6849-4628-b89c-359ab1e73180] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003955904s
addons_test.go:342: (dbg) Run:  kubectl --context addons-677874 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-677874 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-677874 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.148899964s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.20s)
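The registry check above reduces to probing the registry Service's cluster DNS name from a throwaway pod. A condensed sketch of that step, reusing the busybox image and Service name from the log:

    # probe the in-cluster registry through its Service DNS name from a one-off busybox pod
    kubectl --context addons-677874 run registry-test --rm --restart=Never -it \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"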

                                                
                                    
x
+
TestAddons/parallel/Ingress (19.5s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-677874 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-677874 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-677874 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1ef2f586-4368-468a-b09f-6dd5e4ef26e4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1ef2f586-4368-468a-b09f-6dd5e4ef26e4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003439803s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-677874 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-677874 addons disable ingress-dns --alsologtostderr -v=1: (1.848783715s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-677874 addons disable ingress --alsologtostderr -v=1: (7.842770565s)
--- PASS: TestAddons/parallel/Ingress (19.50s)
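Condensed, the two reachability checks above are: curl the ingress controller on the node with the expected Host header, and resolve an ingress hostname against the node IP served by ingress-dns. A minimal sketch reusing the addresses from the log (the CI run invokes the locally built out/minikube-linux-arm64; plain `minikube` is used here):

    # hit the nginx Ingress from inside the node, presenting the host it routes on
    minikube -p addons-677874 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns answers DNS for ingress hosts on the node IP (192.168.49.2 in this run)
    nslookup hello-john.test "$(minikube -p addons-677874 ip)"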

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.96s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zxrsl" [744d378f-5523-4ff3-ab7c-e5307c9971ed] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003892799s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-677874
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-677874: (5.955089677s)
--- PASS: TestAddons/parallel/InspektorGadget (11.96s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.88s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.454139ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-ssvzm" [5519d6dc-d96d-44ac-a20c-2ada1e90ed3e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003826348s
addons_test.go:417: (dbg) Run:  kubectl --context addons-677874 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.88s)

                                                
                                    
x
+
TestAddons/parallel/CSI (35.77s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.295977ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-677874 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/08/18 18:45:46 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-677874 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7bd17b1f-5c1b-40c4-a532-b7d114fc2d8a] Pending
helpers_test.go:344: "task-pv-pod" [7bd17b1f-5c1b-40c4-a532-b7d114fc2d8a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7bd17b1f-5c1b-40c4-a532-b7d114fc2d8a] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004600444s
addons_test.go:590: (dbg) Run:  kubectl --context addons-677874 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-677874 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-677874 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-677874 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-677874 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-677874 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-677874 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [961574f0-5bbd-4917-9ebd-ac9aa34d55ef] Pending
helpers_test.go:344: "task-pv-pod-restore" [961574f0-5bbd-4917-9ebd-ac9aa34d55ef] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [961574f0-5bbd-4917-9ebd-ac9aa34d55ef] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004675734s
addons_test.go:632: (dbg) Run:  kubectl --context addons-677874 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-677874 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-677874 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-677874 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.826715478s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (35.77s)
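The sequence above exercises the csi-hostpath driver end to end: provision a PVC, snapshot it, then restore into a new PVC. A condensed sketch of the same kubectl flow; the manifests live under the test's testdata/ directory and are not reproduced here, and the --context addons-677874 flag is omitted for brevity:

    # provision a PVC and a pod that mounts it
    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml
    # snapshot the volume and poll until it is usable
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
    # delete the original pod/PVC, then restore a new PVC (and pod) from the snapshot
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml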

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.74s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-677874 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-px8qf" [5841518c-beb6-41fe-be08-6de7383c22fe] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-px8qf" [5841518c-beb6-41fe-be08-6de7383c22fe] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-px8qf" [5841518c-beb6-41fe-be08-6de7383c22fe] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004020056s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-677874 addons disable headlamp --alsologtostderr -v=1: (5.746281038s)
--- PASS: TestAddons/parallel/Headlamp (17.74s)
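The addon lifecycle above is simply enable, wait for the labelled pod, disable. A minimal sketch with the label and namespace taken from the log:

    minikube addons enable headlamp -p addons-677874
    kubectl --context addons-677874 -n headlamp wait --for=condition=ready pod \
      -l app.kubernetes.io/name=headlamp --timeout=8m
    minikube -p addons-677874 addons disable headlamp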

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-s5ms9" [30ade818-d4d1-467d-a7db-e0028ceb6856] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003846666s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-677874
--- PASS: TestAddons/parallel/CloudSpanner (6.57s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (51.78s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-677874 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-677874 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677874 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [05a789d3-659e-46e2-bfa7-b633b4e5a5b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [05a789d3-659e-46e2-bfa7-b633b4e5a5b0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [05a789d3-659e-46e2-bfa7-b633b4e5a5b0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003005727s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-677874 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 ssh "cat /opt/local-path-provisioner/pvc-13396e53-fbda-43b0-99ef-018bdd3364bc_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-677874 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-677874 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-677874 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.474368997s)
--- PASS: TestAddons/parallel/LocalPath (51.78s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.67s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5z977" [454d0444-42bd-4534-be28-2b2eb61b9f4b] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004794378s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-677874
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.67s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (12.03s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-vp5cf" [cd8a376d-890a-4799-a847-0b91865dbb83] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.009180568s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-677874 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-677874 addons disable yakd --alsologtostderr -v=1: (6.016049269s)
--- PASS: TestAddons/parallel/Yakd (12.03s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.34s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-677874
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-677874: (12.092039s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-677874
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-677874
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-677874
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

                                                
                                    
x
+
TestCertOptions (38.82s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-562344 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-562344 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.215228955s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-562344 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-562344 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-562344 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-562344" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-562344
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-562344: (1.979170832s)
--- PASS: TestCertOptions (38.82s)
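What the assertions above check can be inspected directly: the extra --apiserver-ips/--apiserver-names should appear as SANs in the generated certificate, and the kubeconfig should point at the custom --apiserver-port. A short sketch, with the certificate path as in the log:

    # list the SANs baked into the apiserver certificate
    minikube -p cert-options-562344 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A2 'Subject Alternative Name'
    # the server entry for this cluster in the kubeconfig should end in :8555
    kubectl --context cert-options-562344 config view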

                                                
                                    
x
+
TestCertExpiration (230.02s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-045829 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-045829 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.841823676s)
E0818 19:24:58.022543  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-045829 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-045829 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.423218665s)
helpers_test.go:175: Cleaning up "cert-expiration-045829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-045829
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-045829: (2.753840671s)
--- PASS: TestCertExpiration (230.02s)
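The flow above starts the cluster with deliberately short-lived certificates, then re-runs start with a longer --cert-expiration so minikube rotates them in place. A minimal sketch; the enddate check is an addition here, not something the test runs:

    minikube start -p cert-expiration-045829 --memory=2048 --cert-expiration=3m \
      --driver=docker --container-runtime=containerd
    # inspect when the current apiserver certificate expires
    minikube -p cert-expiration-045829 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
    # a second start with a longer expiration regenerates the certificates
    minikube start -p cert-expiration-045829 --memory=2048 --cert-expiration=8760h \
      --driver=docker --container-runtime=containerd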

                                                
                                    
x
+
TestForceSystemdFlag (37.69s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-451374 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-451374 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.857732234s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-451374 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-451374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-451374
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-451374: (2.487859308s)
--- PASS: TestForceSystemdFlag (37.69s)

                                                
                                    
x
+
TestForceSystemdEnv (39.47s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-467857 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-467857 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.786591579s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-467857 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-467857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-467857
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-467857: (2.377835373s)
--- PASS: TestForceSystemdEnv (39.47s)

                                                
                                    
x
+
TestDockerEnvContainerd (43.95s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-676806 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-676806 --driver=docker  --container-runtime=containerd: (28.534890165s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-676806"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ZuV8FWMRm3L3/agent.178730" SSH_AGENT_PID="178731" DOCKER_HOST=ssh://docker@127.0.0.1:38322 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ZuV8FWMRm3L3/agent.178730" SSH_AGENT_PID="178731" DOCKER_HOST=ssh://docker@127.0.0.1:38322 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ZuV8FWMRm3L3/agent.178730" SSH_AGENT_PID="178731" DOCKER_HOST=ssh://docker@127.0.0.1:38322 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.026630763s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-ZuV8FWMRm3L3/agent.178730" SSH_AGENT_PID="178731" DOCKER_HOST=ssh://docker@127.0.0.1:38322 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-676806" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-676806
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-676806: (1.976089811s)
--- PASS: TestDockerEnvContainerd (43.95s)
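The SSH_AUTH_SOCK/DOCKER_HOST variables above come from `minikube docker-env --ssh-host --ssh-add`, which points the local docker CLI at the daemon inside the minikube node over SSH. A condensed sketch of the same workflow (the log additionally sets DOCKER_BUILDKIT=0 for the build):

    # export DOCKER_HOST=ssh://... and load the node's SSH key into an agent
    eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-676806)"
    docker version    # the Server section now reports the daemon inside the node
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls   # the freshly built image is visible on the node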

                                                
                                    
x
+
TestErrorSpam/setup (33.11s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-015355 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-015355 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-015355 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-015355 --driver=docker  --container-runtime=containerd: (33.113849045s)
--- PASS: TestErrorSpam/setup (33.11s)

                                                
                                    
x
+
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
x
+
TestErrorSpam/status (1.09s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 status
--- PASS: TestErrorSpam/status (1.09s)

                                                
                                    
x
+
TestErrorSpam/pause (1.79s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 pause
--- PASS: TestErrorSpam/pause (1.79s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.85s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

                                                
                                    
x
+
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 stop: (1.283650755s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-015355 --log_dir /tmp/nospam-015355 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19423-154159/.minikube/files/etc/test/nested/copy/159549/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (51.67s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-249969 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-249969 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.664778715s)
--- PASS: TestFunctional/serial/StartWithProxy (51.67s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (5.99s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-249969 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-249969 --alsologtostderr -v=8: (5.980616997s)
functional_test.go:663: soft start took 5.988787757s for "functional-249969" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.99s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-249969 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 cache add registry.k8s.io/pause:3.1: (1.549043402s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 cache add registry.k8s.io/pause:3.3: (1.335130868s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 cache add registry.k8s.io/pause:latest: (1.221704191s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-249969 /tmp/TestFunctionalserialCacheCmdcacheadd_local105474321/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cache add minikube-local-cache-test:functional-249969
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cache delete minikube-local-cache-test:functional-249969
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-249969
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.56s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (293.845485ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 cache reload: (1.12369103s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.04s)
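The reload check above deletes an image on the node, confirms it is gone, then uses `cache reload` to restore it from minikube's on-host cache. The same commands, condensed:

    minikube -p functional-249969 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-249969 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits non-zero: image gone
    minikube -p functional-249969 cache reload
    minikube -p functional-249969 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again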

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 kubectl -- --context functional-249969 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-249969 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (47.24s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-249969 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-249969 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.240338647s)
functional_test.go:761: restart took 47.240441604s for "functional-249969" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (47.24s)
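--extra-config=apiserver.<flag>=<value> is forwarded into the kube-apiserver configuration on restart. A minimal way to confirm the flag landed, assuming the standard kubeadm static-manifest path inside the node (this check is an addition, not part of the test):

    minikube start -p functional-249969 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # the flag should show up in the apiserver's static pod manifest
    minikube -p functional-249969 ssh \
      "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"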

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-249969 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 logs: (1.707143381s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 logs --file /tmp/TestFunctionalserialLogsFileCmd690307725/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 logs --file /tmp/TestFunctionalserialLogsFileCmd690307725/001/logs.txt: (1.689707085s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.49s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-249969 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-249969
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-249969: exit status 115 (619.886525ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31758 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-249969 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.49s)
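A minimal sketch (assuming a minikube binary on PATH; the profile and service names are the ones from this run) of how a caller can detect the SVC_UNREACHABLE case shown above: `minikube service` exits non-zero, status 115 in this run, when the service has no running pod behind it.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "service", "invalid-svc", "-p", "functional-249969")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit: the service exists but is not reachable.
		fmt.Printf("exit code %d:\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("service reachable:\n%s", out)
}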

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 config get cpus: exit status 14 (72.276707ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 config get cpus: exit status 14 (64.210644ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
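A minimal sketch of the set/get/unset round trip exercised above: in this run, `config get cpus` on an unset key exits with status 14 and reports "specified key could not be found in config". The profile name mirrors this run but is otherwise illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// run invokes `minikube -p functional-249969 config <args...>`.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube",
		append([]string{"-p", "functional-249969", "config"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	_, _ = run("set", "cpus", "2")
	if v, err := run("get", "cpus"); err == nil {
		fmt.Println("cpus =", v)
	}
	_, _ = run("unset", "cpus")
	if _, err := run("get", "cpus"); err != nil {
		fmt.Println("get after unset failed as expected:", err) // exit status 14 in this run
	}
}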

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-249969 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-249969 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 194551: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.16s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-249969 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-249969 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (210.852695ms)

                                                
                                                
-- stdout --
	* [functional-249969] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:51:34.249213  193350 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:51:34.249403  193350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:51:34.249415  193350 out.go:358] Setting ErrFile to fd 2...
	I0818 18:51:34.249420  193350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:51:34.249651  193350 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 18:51:34.250021  193350 out.go:352] Setting JSON to false
	I0818 18:51:34.251046  193350 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":99239,"bootTime":1723907856,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0818 18:51:34.251122  193350 start.go:139] virtualization:  
	I0818 18:51:34.253791  193350 out.go:177] * [functional-249969] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0818 18:51:34.256010  193350 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:51:34.256066  193350 notify.go:220] Checking for updates...
	I0818 18:51:34.260761  193350 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:51:34.262741  193350 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 18:51:34.264405  193350 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	I0818 18:51:34.266193  193350 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0818 18:51:34.268152  193350 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:51:34.270754  193350 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 18:51:34.271467  193350 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:51:34.299059  193350 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0818 18:51:34.299212  193350 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:51:34.379902  193350 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-18 18:51:34.366840273 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:51:34.380006  193350 docker.go:307] overlay module found
	I0818 18:51:34.382273  193350 out.go:177] * Using the docker driver based on existing profile
	I0818 18:51:34.384039  193350 start.go:297] selected driver: docker
	I0818 18:51:34.384058  193350 start.go:901] validating driver "docker" against &{Name:functional-249969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-249969 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:51:34.384167  193350 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:51:34.386714  193350 out.go:201] 
	W0818 18:51:34.388623  193350 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0818 18:51:34.390282  193350 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-249969 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.50s)
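A minimal sketch of the dry-run validation seen above (profile name and flags copied from this run): requesting 250MB fails fast with RSRC_INSUFFICIENT_REQ_MEMORY, exit status 23 here, without modifying the existing cluster.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "functional-249969",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(ee, &ee); errors.As(err, &ee) {
		fmt.Printf("dry run rejected the config (exit %d):\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Println("dry run accepted the config")
}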

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-249969 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-249969 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (189.391266ms)

                                                
                                                
-- stdout --
	* [functional-249969] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:51:37.641866  194318 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:51:37.642032  194318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:51:37.642060  194318 out.go:358] Setting ErrFile to fd 2...
	I0818 18:51:37.642067  194318 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:51:37.642827  194318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 18:51:37.643284  194318 out.go:352] Setting JSON to false
	I0818 18:51:37.644386  194318 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":99242,"bootTime":1723907856,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0818 18:51:37.644509  194318 start.go:139] virtualization:  
	I0818 18:51:37.647890  194318 out.go:177] * [functional-249969] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0818 18:51:37.649948  194318 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:51:37.650086  194318 notify.go:220] Checking for updates...
	I0818 18:51:37.653944  194318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:51:37.656039  194318 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 18:51:37.657690  194318 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	I0818 18:51:37.659617  194318 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0818 18:51:37.661302  194318 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:51:37.664299  194318 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 18:51:37.664914  194318 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:51:37.697437  194318 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0818 18:51:37.697561  194318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:51:37.754792  194318 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-18 18:51:37.745172896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:51:37.754904  194318 docker.go:307] overlay module found
	I0818 18:51:37.757918  194318 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0818 18:51:37.759985  194318 start.go:297] selected driver: docker
	I0818 18:51:37.760003  194318 start.go:901] validating driver "docker" against &{Name:functional-249969 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-249969 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:51:37.760140  194318 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:51:37.762760  194318 out.go:201] 
	W0818 18:51:37.764542  194318 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0818 18:51:37.766173  194318 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-249969 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-249969 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-nfxcn" [0d7c6eab-da2d-4d9d-85a1-5157bcb66a5c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-nfxcn" [0d7c6eab-da2d-4d9d-85a1-5157bcb66a5c] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004354997s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31767
functional_test.go:1675: http://192.168.49.2:31767: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-nfxcn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31767
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.69s)
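A minimal sketch of the connectivity check above: take the NodePort URL printed by `minikube service hello-node-connect --url` (the address below is simply the one from this run) and confirm the echoserver answers over HTTP.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:31767")
	if err != nil {
		fmt.Println("endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status %d\n%s", resp.StatusCode, body)
}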

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (25.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [93cc789a-ab4a-4675-b332-7ef2bf5eb596] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003324173s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-249969 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-249969 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-249969 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-249969 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [51cf2b50-9e4c-4143-bfe7-10e6c6b5a217] Pending
helpers_test.go:344: "sp-pod" [51cf2b50-9e4c-4143-bfe7-10e6c6b5a217] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [51cf2b50-9e4c-4143-bfe7-10e6c6b5a217] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003718652s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-249969 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-249969 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-249969 delete -f testdata/storage-provisioner/pod.yaml: (1.214713221s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-249969 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [17f62c01-54e6-4fbe-a7fd-07f4098929be] Pending
helpers_test.go:344: "sp-pod" [17f62c01-54e6-4fbe-a7fd-07f4098929be] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [17f62c01-54e6-4fbe-a7fd-07f4098929be] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.007213453s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-249969 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.28s)
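A minimal sketch of the persistence check the test performs above, driven through kubectl (pod, claim, and manifest names are the ones from this run): write a file into the mounted claim, recreate the pod, and confirm the file survived.

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl command against the functional-249969 context and echoes its output.
func kubectl(args ...string) error {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-249969"}, args...)...).CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	_ = kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	_ = kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	_ = kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// Once the recreated pod is Running, the file written before the delete is still present.
	_ = kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
}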

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh -n functional-249969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cp functional-249969:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd635722567/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh -n functional-249969 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh -n functional-249969 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.16s)
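A minimal sketch of the copy-then-verify pattern above: `minikube cp` pushes a local file into the node, and an ssh `cat` confirms the contents landed. Paths and the profile name mirror this run but are otherwise illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	mk := func(args ...string) ([]byte, error) {
		return exec.Command("minikube",
			append([]string{"-p", "functional-249969"}, args...)...).CombinedOutput()
	}
	if out, err := mk("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}
	out, _ := mk("ssh", "-n", "functional-249969", "sudo cat /home/docker/cp-test.txt")
	fmt.Printf("remote contents:\n%s", out)
}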

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/159549/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo cat /etc/test/nested/copy/159549/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/159549.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo cat /etc/ssl/certs/159549.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/159549.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo cat /usr/share/ca-certificates/159549.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/1595492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo cat /etc/ssl/certs/1595492.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/1595492.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo cat /usr/share/ca-certificates/1595492.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.18s)
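A minimal sketch of the cert-sync verification above: cat each expected certificate path inside the node over ssh, where a non-zero exit means the file was not synced. The paths are the ones from this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/159549.pem",
		"/usr/share/ca-certificates/159549.pem",
		"/etc/ssl/certs/51391683.0",
	}
	for _, p := range paths {
		cmd := exec.Command("minikube", "-p", "functional-249969", "ssh", "sudo cat "+p)
		if err := cmd.Run(); err != nil {
			fmt.Println("missing in VM:", p, err)
			continue
		}
		fmt.Println("present in VM:", p)
	}
}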

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-249969 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 ssh "sudo systemctl is-active docker": exit status 1 (369.820649ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 ssh "sudo systemctl is-active crio": exit status 1 (344.708221ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.72s)
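A minimal sketch of the check above: with containerd selected as the runtime, the docker and crio units inside the node should report "inactive", so the ssh'd `systemctl is-active` exits non-zero (as seen in the stderr blocks above). Profile name as in this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		out, err := exec.Command("minikube", "-p", "functional-249969", "ssh",
			"sudo systemctl is-active "+unit).CombinedOutput()
		fmt.Printf("%s: %q err=%v\n", unit, out, err) // expect "inactive" and a non-zero exit
	}
}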

                                                
                                    
x
+
TestFunctional/parallel/License (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 version --short
--- PASS: TestFunctional/parallel/Version/short (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 version -o=json --components: (1.264359073s)
--- PASS: TestFunctional/parallel/Version/components (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-249969 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-249969
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-249969
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-249969 image ls --format short --alsologtostderr:
I0818 18:51:47.839950  195963 out.go:345] Setting OutFile to fd 1 ...
I0818 18:51:47.840143  195963 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:47.840156  195963 out.go:358] Setting ErrFile to fd 2...
I0818 18:51:47.840161  195963 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:47.840428  195963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
I0818 18:51:47.841084  195963 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:47.841247  195963 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:47.841768  195963 cli_runner.go:164] Run: docker container inspect functional-249969 --format={{.State.Status}}
I0818 18:51:47.860478  195963 ssh_runner.go:195] Run: systemctl --version
I0818 18:51:47.860534  195963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-249969
I0818 18:51:47.882869  195963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38332 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/functional-249969/id_rsa Username:docker}
I0818 18:51:47.976368  195963 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-249969 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-249969  | sha256:736472 | 991B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kicbase/echo-server               | functional-249969  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-249969 image ls --format table --alsologtostderr:
I0818 18:51:48.929727  196211 out.go:345] Setting OutFile to fd 1 ...
I0818 18:51:48.929902  196211 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:48.929915  196211 out.go:358] Setting ErrFile to fd 2...
I0818 18:51:48.929921  196211 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:48.930197  196211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
I0818 18:51:48.930857  196211 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:48.931028  196211 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:48.931550  196211 cli_runner.go:164] Run: docker container inspect functional-249969 --format={{.State.Status}}
I0818 18:51:48.948924  196211 ssh_runner.go:195] Run: systemctl --version
I0818 18:51:48.948977  196211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-249969
I0818 18:51:48.966237  196211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38332 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/functional-249969/id_rsa Username:docker}
I0818 18:51:49.056605  196211 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-249969 image ls --format json --alsologtostderr:
[{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","r
epoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdc
b91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1
924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-249969"],"size":"2173567"},{"id":"sha256:736472db99e48e284488949f04462ac1c340c5bd409a4d7e2671aefa3fef9241","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-249969"],"size":"991"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add"],"repoTags":["docker.io/libra
ry/nginx:latest"],"size":"67690150"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-249969 image ls --format json --alsologtostderr:
I0818 18:51:48.662307  196142 out.go:345] Setting OutFile to fd 1 ...
I0818 18:51:48.662500  196142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:48.662506  196142 out.go:358] Setting ErrFile to fd 2...
I0818 18:51:48.662512  196142 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:48.662774  196142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
I0818 18:51:48.663404  196142 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:48.663520  196142 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:48.664107  196142 cli_runner.go:164] Run: docker container inspect functional-249969 --format={{.State.Status}}
I0818 18:51:48.685274  196142 ssh_runner.go:195] Run: systemctl --version
I0818 18:51:48.685335  196142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-249969
I0818 18:51:48.716597  196142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38332 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/functional-249969/id_rsa Username:docker}
I0818 18:51:48.833199  196142 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-249969 image ls --format yaml --alsologtostderr:
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-249969
size: "2173567"
- id: sha256:736472db99e48e284488949f04462ac1c340c5bd409a4d7e2671aefa3fef9241
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-249969
size: "991"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-249969 image ls --format yaml --alsologtostderr:
I0818 18:51:48.378345  196078 out.go:345] Setting OutFile to fd 1 ...
I0818 18:51:48.378600  196078 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:48.378612  196078 out.go:358] Setting ErrFile to fd 2...
I0818 18:51:48.378618  196078 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:48.378878  196078 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
I0818 18:51:48.379624  196078 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:48.379763  196078 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:48.380330  196078 cli_runner.go:164] Run: docker container inspect functional-249969 --format={{.State.Status}}
I0818 18:51:48.409383  196078 ssh_runner.go:195] Run: systemctl --version
I0818 18:51:48.409452  196078 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-249969
I0818 18:51:48.438849  196078 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38332 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/functional-249969/id_rsa Username:docker}
I0818 18:51:48.540508  196078 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)
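Both listings above come from the same code path: "minikube image ls" opens an SSH session to the node, runs crictl there, and renders the result in the requested format. A minimal sketch of reproducing this by hand against the same profile (assuming functional-249969 is still running):

    # list images through minikube, in JSON and YAML form
    out/minikube-linux-arm64 -p functional-249969 image ls --format json
    out/minikube-linux-arm64 -p functional-249969 image ls --format yaml
    # the same data, taken directly from the container runtime inside the node
    out/minikube-linux-arm64 -p functional-249969 ssh "sudo crictl images --output json"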

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 ssh pgrep buildkitd: exit status 1 (302.865223ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image build -t localhost/my-image:functional-249969 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 image build -t localhost/my-image:functional-249969 testdata/build --alsologtostderr: (2.463048938s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-249969 image build -t localhost/my-image:functional-249969 testdata/build --alsologtostderr:
I0818 18:51:48.401819  196084 out.go:345] Setting OutFile to fd 1 ...
I0818 18:51:48.403470  196084 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:48.403530  196084 out.go:358] Setting ErrFile to fd 2...
I0818 18:51:48.403552  196084 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:51:48.403900  196084 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
I0818 18:51:48.404610  196084 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:48.406621  196084 config.go:182] Loaded profile config "functional-249969": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0818 18:51:48.407381  196084 cli_runner.go:164] Run: docker container inspect functional-249969 --format={{.State.Status}}
I0818 18:51:48.432041  196084 ssh_runner.go:195] Run: systemctl --version
I0818 18:51:48.432105  196084 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-249969
I0818 18:51:48.464111  196084 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38332 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/functional-249969/id_rsa Username:docker}
I0818 18:51:48.568390  196084 build_images.go:161] Building image from path: /tmp/build.3554029065.tar
I0818 18:51:48.568493  196084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0818 18:51:48.585852  196084 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3554029065.tar
I0818 18:51:48.590284  196084 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3554029065.tar: stat -c "%s %y" /var/lib/minikube/build/build.3554029065.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3554029065.tar': No such file or directory
I0818 18:51:48.590357  196084 ssh_runner.go:362] scp /tmp/build.3554029065.tar --> /var/lib/minikube/build/build.3554029065.tar (3072 bytes)
I0818 18:51:48.626226  196084 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3554029065
I0818 18:51:48.636647  196084 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3554029065 -xf /var/lib/minikube/build/build.3554029065.tar
I0818 18:51:48.646611  196084 containerd.go:394] Building image: /var/lib/minikube/build/build.3554029065
I0818 18:51:48.646683  196084 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3554029065 --local dockerfile=/var/lib/minikube/build/build.3554029065 --output type=image,name=localhost/my-image:functional-249969
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:268e26f6342edb41ccd8d4c7781216e1d9f80e06d664eb2740cd85916a8de187 0.0s done
#8 exporting config sha256:ff9de6b03bc747aab212cfdc8ecf554e83232d70c724434473de12d9abfde8ef 0.0s done
#8 naming to localhost/my-image:functional-249969 done
#8 DONE 0.1s
I0818 18:51:50.775320  196084 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3554029065 --local dockerfile=/var/lib/minikube/build/build.3554029065 --output type=image,name=localhost/my-image:functional-249969: (2.128612145s)
I0818 18:51:50.775388  196084 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3554029065
I0818 18:51:50.784130  196084 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3554029065.tar
I0818 18:51:50.792887  196084 build_images.go:217] Built localhost/my-image:functional-249969 from /tmp/build.3554029065.tar
I0818 18:51:50.792920  196084 build_images.go:133] succeeded building to: functional-249969
I0818 18:51:50.792926  196084 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.99s)
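The BuildKit trace above reveals the shape of the testdata/build context: a three-step Dockerfile (FROM the gcr.io/k8s-minikube/busybox base, RUN true, ADD content.txt) plus a 62 B build context. A hypothetical reconstruction of that fixture and of the build it drives is sketched below; the real testdata/build directory ships with the minikube sources and may differ in detail:

    # recreate a build context similar to the one exercised above (paths and file contents are illustrative)
    mkdir -p /tmp/build-demo
    printf 'hello from the build test\n' > /tmp/build-demo/content.txt
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-demo/Dockerfile
    # build inside the node with buildctl, then confirm the image landed in containerd
    out/minikube-linux-arm64 -p functional-249969 image build -t localhost/my-image:functional-249969 /tmp/build-demo --alsologtostderr
    out/minikube-linux-arm64 -p functional-249969 image ls | grep my-image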

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-249969
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image load --daemon kicbase/echo-server:functional-249969 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 image load --daemon kicbase/echo-server:functional-249969 --alsologtostderr: (1.235082613s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image load --daemon kicbase/echo-server:functional-249969 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 image load --daemon kicbase/echo-server:functional-249969 --alsologtostderr: (1.037919514s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-249969
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image load --daemon kicbase/echo-server:functional-249969 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-249969 image load --daemon kicbase/echo-server:functional-249969 --alsologtostderr: (1.149357976s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.72s)
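The three daemon-load tests above follow one pattern: prepare an image in the host Docker daemon, push it into the node's containerd store with "image load --daemon", then confirm it with "image ls". Condensed into a sketch:

    docker pull kicbase/echo-server:latest
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-249969
    # copy the image from the host daemon into the node's containerd store
    out/minikube-linux-arm64 -p functional-249969 image load --daemon kicbase/echo-server:functional-249969 --alsologtostderr
    out/minikube-linux-arm64 -p functional-249969 image ls | grep echo-server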

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "378.383851ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "96.582187ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "401.871735ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "82.939972ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image save kicbase/echo-server:functional-249969 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-249969 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-249969 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-249969 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 191804: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-249969 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image rm kicbase/echo-server:functional-249969 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.75s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-249969 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-249969 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7ee3ff98-4e2f-4013-b6aa-43de4d58c510] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7ee3ff98-4e2f-4013-b6aa-43de4d58c510] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004011045s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-249969
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 image save --daemon kicbase/echo-server:functional-249969 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-249969
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)
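ImageSaveToFile, ImageLoadFromFile and ImageSaveDaemon exercise the opposite direction: exporting an image from the node to a tarball, or back into the host Docker daemon, and importing it again. Roughly, with /tmp standing in for the workspace path used by this job:

    # node -> tarball -> node
    out/minikube-linux-arm64 -p functional-249969 image save kicbase/echo-server:functional-249969 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-249969 image load /tmp/echo-server-save.tar
    # node -> host docker daemon
    docker rmi kicbase/echo-server:functional-249969
    out/minikube-linux-arm64 -p functional-249969 image save --daemon kicbase/echo-server:functional-249969
    docker image inspect kicbase/echo-server:functional-249969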

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-249969 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.23.163 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-249969 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
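Taken together, the TunnelCmd serial steps amount to a LoadBalancer smoke test: minikube tunnel runs as a background daemon, the nginx LoadBalancer Service from testdata/testsvc.yaml picks up an ingress IP, and the test fetches the page straight from the host. Reproduced by hand it looks roughly like this (testsvc.yaml is the fixture shipped with the minikube tests):

    out/minikube-linux-arm64 -p functional-249969 tunnel --alsologtostderr &
    kubectl --context functional-249969 apply -f testdata/testsvc.yaml
    kubectl --context functional-249969 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # curl the reported ingress IP (http://10.108.23.163 in this run), then stop the tunnel
    kill %1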

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-249969 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-249969 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-4k8mf" [4fa2ff9a-e4c5-4dcd-a4ab-6d01643e8570] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-4k8mf" [4fa2ff9a-e4c5-4dcd-a4ab-6d01643e8570] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005054215s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.26s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 service list -o json
functional_test.go:1494: Took "651.469846ms" to run "out/minikube-linux-arm64 -p functional-249969 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdany-port2286872877/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724007094684628742" to /tmp/TestFunctionalparallelMountCmdany-port2286872877/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724007094684628742" to /tmp/TestFunctionalparallelMountCmdany-port2286872877/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724007094684628742" to /tmp/TestFunctionalparallelMountCmdany-port2286872877/001/test-1724007094684628742
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (416.59957ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 18 18:51 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 18 18:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 18 18:51 test-1724007094684628742
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh cat /mount-9p/test-1724007094684628742
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-249969 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [5a8877bb-4c13-4abb-8639-b38174016b42] Pending
helpers_test.go:344: "busybox-mount" [5a8877bb-4c13-4abb-8639-b38174016b42] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [5a8877bb-4c13-4abb-8639-b38174016b42] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [5a8877bb-4c13-4abb-8639-b38174016b42] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004076894s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-249969 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdany-port2286872877/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.65s)
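MountCmd/any-port verifies a 9p mount end to end: a host directory is mounted into the node, a busybox pod (testdata/busybox-mount-test.yaml) reads and writes through it, and the mount is inspected and torn down over SSH. The same flow by hand, with /tmp/mount-demo standing in for the per-test temp directory:

    out/minikube-linux-arm64 mount -p functional-249969 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
    out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-249969 ssh -- ls -la /mount-9p
    out/minikube-linux-arm64 -p functional-249969 ssh "sudo umount -f /mount-9p"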

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32431
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32431
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.54s)
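The ServiceCmd subtests deploy a NodePort echo server once and then resolve its endpoint several ways (service list, --https --url, --format, --url). Condensed:

    kubectl --context functional-249969 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-249969 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-249969 service list
    out/minikube-linux-arm64 -p functional-249969 service --namespace=default --https --url hello-node
    out/minikube-linux-arm64 -p functional-249969 service hello-node --url   # http://192.168.49.2:32431 in this run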

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdspecific-port1988010923/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (475.040386ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdspecific-port1988010923/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 ssh "sudo umount -f /mount-9p": exit status 1 (379.594105ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-249969 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdspecific-port1988010923/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.56s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2012434555/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2012434555/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2012434555/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T" /mount1: exit status 1 (906.675495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T" /mount2
2024/08/18 18:51:46 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-249969 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-249969 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2012434555/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2012434555/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-249969 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2012434555/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.37s)
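specific-port and VerifyCleanup add two details on top of the basic mount flow: the 9p server can be pinned to a fixed port with --port, and mount processes left behind for a profile can be cleaned up in one shot with --kill=true. A short sketch, using the same illustrative host path as above:

    out/minikube-linux-arm64 mount -p functional-249969 /tmp/mount-demo:/mount-9p --port 46464 &
    out/minikube-linux-arm64 mount -p functional-249969 --kill=true   # stop the background mount processes for this profile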

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-249969
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-249969
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-249969
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (120.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-939233 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0818 18:51:54.941740  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:54.959023  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:54.970930  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:54.993461  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:55.034731  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:55.116765  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:55.278231  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:55.600059  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:56.242131  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:51:57.523721  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:52:00.086451  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:52:05.208308  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:52:15.449795  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:52:35.931081  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:16.893194  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-939233 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m59.325330388s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (120.18s)
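The repeated cert_rotation errors above refer to the client certificate of the earlier addons-677874 profile and appear to be harmless leftovers; the HA start itself completed in just under two minutes. The bring-up reduces to a single start invocation with --ha (three control-plane nodes in this run) followed by a status check:

    out/minikube-linux-arm64 start -p ha-939233 --wait=true --memory=2200 --ha -v=7 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-939233 status -v=7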

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (31.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-939233 -- rollout status deployment/busybox: (28.158577401s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-9585l -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-nk5wd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-v46sj -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-9585l -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-nk5wd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-v46sj -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-9585l -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-nk5wd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-v46sj -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.18s)
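DeployApp schedules a small busybox Deployment across the new cluster and checks in-cluster DNS from every replica. The equivalent manual check, using one of the pod names from this run:

    out/minikube-linux-arm64 kubectl -p ha-939233 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p ha-939233 -- rollout status deployment/busybox
    out/minikube-linux-arm64 kubectl -p ha-939233 -- get pods -o jsonpath='{.items[*].metadata.name}'
    # repeat for each pod name returned above
    out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-9585l -- nslookup kubernetes.default.svc.cluster.local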

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-9585l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-9585l -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-nk5wd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-nk5wd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-v46sj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-939233 -- exec busybox-7dff88458-v46sj -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.70s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (23.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-939233 -v=7 --alsologtostderr
E0818 18:54:38.815468  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-939233 -v=7 --alsologtostderr: (22.809872048s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr: (1.036234709s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.85s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-939233 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp testdata/cp-test.txt ha-939233:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4112084684/001/cp-test_ha-939233.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233:/home/docker/cp-test.txt ha-939233-m02:/home/docker/cp-test_ha-939233_ha-939233-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test_ha-939233_ha-939233-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233:/home/docker/cp-test.txt ha-939233-m03:/home/docker/cp-test_ha-939233_ha-939233-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m03 "sudo cat /home/docker/cp-test_ha-939233_ha-939233-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233:/home/docker/cp-test.txt ha-939233-m04:/home/docker/cp-test_ha-939233_ha-939233-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m04 "sudo cat /home/docker/cp-test_ha-939233_ha-939233-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp testdata/cp-test.txt ha-939233-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4112084684/001/cp-test_ha-939233-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m02:/home/docker/cp-test.txt ha-939233:/home/docker/cp-test_ha-939233-m02_ha-939233.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233 "sudo cat /home/docker/cp-test_ha-939233-m02_ha-939233.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m02:/home/docker/cp-test.txt ha-939233-m03:/home/docker/cp-test_ha-939233-m02_ha-939233-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m03 "sudo cat /home/docker/cp-test_ha-939233-m02_ha-939233-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m02:/home/docker/cp-test.txt ha-939233-m04:/home/docker/cp-test_ha-939233-m02_ha-939233-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m04 "sudo cat /home/docker/cp-test_ha-939233-m02_ha-939233-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp testdata/cp-test.txt ha-939233-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4112084684/001/cp-test_ha-939233-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m03:/home/docker/cp-test.txt ha-939233:/home/docker/cp-test_ha-939233-m03_ha-939233.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233 "sudo cat /home/docker/cp-test_ha-939233-m03_ha-939233.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m03:/home/docker/cp-test.txt ha-939233-m02:/home/docker/cp-test_ha-939233-m03_ha-939233-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test_ha-939233-m03_ha-939233-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m03:/home/docker/cp-test.txt ha-939233-m04:/home/docker/cp-test_ha-939233-m03_ha-939233-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m04 "sudo cat /home/docker/cp-test_ha-939233-m03_ha-939233-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp testdata/cp-test.txt ha-939233-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4112084684/001/cp-test_ha-939233-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m04:/home/docker/cp-test.txt ha-939233:/home/docker/cp-test_ha-939233-m04_ha-939233.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233 "sudo cat /home/docker/cp-test_ha-939233-m04_ha-939233.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m04:/home/docker/cp-test.txt ha-939233-m02:/home/docker/cp-test_ha-939233-m04_ha-939233-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test_ha-939233-m04_ha-939233-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 cp ha-939233-m04:/home/docker/cp-test.txt ha-939233-m03:/home/docker/cp-test_ha-939233-m04_ha-939233-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m03 "sudo cat /home/docker/cp-test_ha-939233-m04_ha-939233-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.20s)
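All of the copy checks above follow one round-trip pattern; a sketch of a single iteration, using the profile and node names from this run, looks like:
    out/minikube-linux-arm64 -p ha-939233 cp testdata/cp-test.txt ha-939233-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-939233 ssh -n ha-939233-m02 "sudo cat /home/docker/cp-test.txt"
Each cp is immediately verified by cat-ing the destination over ssh, which is why the log alternates helpers_test.go:556 (cp) and helpers_test.go:534 (ssh) lines.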

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-939233 node stop m02 -v=7 --alsologtostderr: (12.124267164s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr: exit status 7 (766.443363ms)

                                                
                                                
-- stdout --
	ha-939233
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-939233-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-939233-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-939233-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:55:22.967895  212450 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:55:22.980253  212450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:55:22.980286  212450 out.go:358] Setting ErrFile to fd 2...
	I0818 18:55:22.980309  212450 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:55:22.980660  212450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 18:55:22.980929  212450 out.go:352] Setting JSON to false
	I0818 18:55:22.981022  212450 mustload.go:65] Loading cluster: ha-939233
	I0818 18:55:22.981118  212450 notify.go:220] Checking for updates...
	I0818 18:55:22.981547  212450 config.go:182] Loaded profile config "ha-939233": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 18:55:22.981602  212450 status.go:255] checking status of ha-939233 ...
	I0818 18:55:22.982432  212450 cli_runner.go:164] Run: docker container inspect ha-939233 --format={{.State.Status}}
	I0818 18:55:23.002689  212450 status.go:330] ha-939233 host status = "Running" (err=<nil>)
	I0818 18:55:23.002727  212450 host.go:66] Checking if "ha-939233" exists ...
	I0818 18:55:23.003078  212450 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-939233
	I0818 18:55:23.048056  212450 host.go:66] Checking if "ha-939233" exists ...
	I0818 18:55:23.048357  212450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 18:55:23.048403  212450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-939233
	I0818 18:55:23.074485  212450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38337 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/ha-939233/id_rsa Username:docker}
	I0818 18:55:23.165680  212450 ssh_runner.go:195] Run: systemctl --version
	I0818 18:55:23.170257  212450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:55:23.182342  212450 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 18:55:23.236126  212450 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-18 18:55:23.225809331 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 18:55:23.236762  212450 kubeconfig.go:125] found "ha-939233" server: "https://192.168.49.254:8443"
	I0818 18:55:23.236814  212450 api_server.go:166] Checking apiserver status ...
	I0818 18:55:23.236878  212450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:55:23.250767  212450 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1407/cgroup
	I0818 18:55:23.266555  212450 api_server.go:182] apiserver freezer: "6:freezer:/docker/a69f0a362db8ffd784801ee74b7d76668c4b7c4fafcb43d6e3b1fac57eb93a8d/kubepods/burstable/pod082a379b87b49a69ddc853e819fa91a3/65a3fba559ee2d510526e6fbc13d11da19f794d3feac96618762f84b8f429cbf"
	I0818 18:55:23.266633  212450 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a69f0a362db8ffd784801ee74b7d76668c4b7c4fafcb43d6e3b1fac57eb93a8d/kubepods/burstable/pod082a379b87b49a69ddc853e819fa91a3/65a3fba559ee2d510526e6fbc13d11da19f794d3feac96618762f84b8f429cbf/freezer.state
	I0818 18:55:23.275598  212450 api_server.go:204] freezer state: "THAWED"
	I0818 18:55:23.275628  212450 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0818 18:55:23.283989  212450 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0818 18:55:23.284018  212450 status.go:422] ha-939233 apiserver status = Running (err=<nil>)
	I0818 18:55:23.284048  212450 status.go:257] ha-939233 status: &{Name:ha-939233 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 18:55:23.284072  212450 status.go:255] checking status of ha-939233-m02 ...
	I0818 18:55:23.284397  212450 cli_runner.go:164] Run: docker container inspect ha-939233-m02 --format={{.State.Status}}
	I0818 18:55:23.301694  212450 status.go:330] ha-939233-m02 host status = "Stopped" (err=<nil>)
	I0818 18:55:23.301722  212450 status.go:343] host is not running, skipping remaining checks
	I0818 18:55:23.301743  212450 status.go:257] ha-939233-m02 status: &{Name:ha-939233-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 18:55:23.301764  212450 status.go:255] checking status of ha-939233-m03 ...
	I0818 18:55:23.302088  212450 cli_runner.go:164] Run: docker container inspect ha-939233-m03 --format={{.State.Status}}
	I0818 18:55:23.319373  212450 status.go:330] ha-939233-m03 host status = "Running" (err=<nil>)
	I0818 18:55:23.319400  212450 host.go:66] Checking if "ha-939233-m03" exists ...
	I0818 18:55:23.319749  212450 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-939233-m03
	I0818 18:55:23.340427  212450 host.go:66] Checking if "ha-939233-m03" exists ...
	I0818 18:55:23.342564  212450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 18:55:23.342614  212450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-939233-m03
	I0818 18:55:23.372751  212450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38347 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/ha-939233-m03/id_rsa Username:docker}
	I0818 18:55:23.465548  212450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:55:23.477474  212450 kubeconfig.go:125] found "ha-939233" server: "https://192.168.49.254:8443"
	I0818 18:55:23.477502  212450 api_server.go:166] Checking apiserver status ...
	I0818 18:55:23.477564  212450 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:55:23.490743  212450 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1385/cgroup
	I0818 18:55:23.500480  212450 api_server.go:182] apiserver freezer: "6:freezer:/docker/531f97835adc8b7e9ce25ed3db2310c4f199cb7c25b093c9f7729373db045381/kubepods/burstable/pod1dd032fbf03ee954719a1926ad3dc143/4b8f8100245275c3aa10081e30d24c6c0a60b12d17fe1a32d14465e88e27e360"
	I0818 18:55:23.500560  212450 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/531f97835adc8b7e9ce25ed3db2310c4f199cb7c25b093c9f7729373db045381/kubepods/burstable/pod1dd032fbf03ee954719a1926ad3dc143/4b8f8100245275c3aa10081e30d24c6c0a60b12d17fe1a32d14465e88e27e360/freezer.state
	I0818 18:55:23.509622  212450 api_server.go:204] freezer state: "THAWED"
	I0818 18:55:23.509658  212450 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0818 18:55:23.517639  212450 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0818 18:55:23.517669  212450 status.go:422] ha-939233-m03 apiserver status = Running (err=<nil>)
	I0818 18:55:23.517679  212450 status.go:257] ha-939233-m03 status: &{Name:ha-939233-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 18:55:23.517717  212450 status.go:255] checking status of ha-939233-m04 ...
	I0818 18:55:23.518062  212450 cli_runner.go:164] Run: docker container inspect ha-939233-m04 --format={{.State.Status}}
	I0818 18:55:23.535999  212450 status.go:330] ha-939233-m04 host status = "Running" (err=<nil>)
	I0818 18:55:23.536025  212450 host.go:66] Checking if "ha-939233-m04" exists ...
	I0818 18:55:23.536364  212450 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-939233-m04
	I0818 18:55:23.553781  212450 host.go:66] Checking if "ha-939233-m04" exists ...
	I0818 18:55:23.554167  212450 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 18:55:23.554245  212450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-939233-m04
	I0818 18:55:23.575000  212450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38352 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/ha-939233-m04/id_rsa Username:docker}
	I0818 18:55:23.665075  212450 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:55:23.677799  212450 status.go:257] ha-939233-m04 status: &{Name:ha-939233-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.89s)
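To reproduce the degraded-status check by hand (profile name from this run), the sequence is roughly:
    out/minikube-linux-arm64 -p ha-939233 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
As seen above, status returns exit status 7 while any node is stopped, so the non-zero exit here is expected by the test rather than treated as a failure.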

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (18.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-939233 node start m02 -v=7 --alsologtostderr: (17.081416545s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr: (1.024067742s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-939233 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-939233 -v=7 --alsologtostderr
E0818 18:56:07.079322  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:07.085749  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:07.097210  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:07.118667  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:07.160132  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:07.241628  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:07.403262  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:07.725061  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:08.367195  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:09.648998  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-939233 -v=7 --alsologtostderr: (26.45649523s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-939233 --wait=true -v=7 --alsologtostderr
E0818 18:56:12.211364  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:17.333373  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:27.574696  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:48.055993  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:54.941201  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:57:22.657800  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:57:29.017944  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-939233 --wait=true -v=7 --alsologtostderr: (1m54.226884775s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-939233
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (140.84s)
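The restart round-trip exercised above can be sketched with the same commands the test runs:
    out/minikube-linux-arm64 node list -p ha-939233 -v=7 --alsologtostderr
    out/minikube-linux-arm64 stop -p ha-939233 -v=7 --alsologtostderr
    out/minikube-linux-arm64 start -p ha-939233 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-arm64 node list -p ha-939233
The test compares the node list before and after the restart; --wait=true keeps start blocking until the cluster components report ready.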

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (9.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-939233 node delete m03 -v=7 --alsologtostderr: (8.624659521s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.54s)
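A sketch of the deletion check, with the names from this run:
    out/minikube-linux-arm64 -p ha-939233 node delete m03 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
    kubectl get nodes
The follow-up kubectl call confirms the node is gone on the Kubernetes side as well, not just from the minikube profile.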

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-939233 stop -v=7 --alsologtostderr: (35.906827863s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr: exit status 7 (116.921383ms)

                                                
                                                
-- stdout --
	ha-939233
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-939233-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-939233-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:58:50.203282  226189 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:58:50.203411  226189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:58:50.203420  226189 out.go:358] Setting ErrFile to fd 2...
	I0818 18:58:50.203425  226189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:58:50.203643  226189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 18:58:50.203862  226189 out.go:352] Setting JSON to false
	I0818 18:58:50.203901  226189 mustload.go:65] Loading cluster: ha-939233
	I0818 18:58:50.203992  226189 notify.go:220] Checking for updates...
	I0818 18:58:50.204352  226189 config.go:182] Loaded profile config "ha-939233": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 18:58:50.204365  226189 status.go:255] checking status of ha-939233 ...
	I0818 18:58:50.204855  226189 cli_runner.go:164] Run: docker container inspect ha-939233 --format={{.State.Status}}
	I0818 18:58:50.222927  226189 status.go:330] ha-939233 host status = "Stopped" (err=<nil>)
	I0818 18:58:50.222951  226189 status.go:343] host is not running, skipping remaining checks
	I0818 18:58:50.222959  226189 status.go:257] ha-939233 status: &{Name:ha-939233 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 18:58:50.222989  226189 status.go:255] checking status of ha-939233-m02 ...
	I0818 18:58:50.223323  226189 cli_runner.go:164] Run: docker container inspect ha-939233-m02 --format={{.State.Status}}
	I0818 18:58:50.253389  226189 status.go:330] ha-939233-m02 host status = "Stopped" (err=<nil>)
	I0818 18:58:50.253430  226189 status.go:343] host is not running, skipping remaining checks
	I0818 18:58:50.253439  226189 status.go:257] ha-939233-m02 status: &{Name:ha-939233-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 18:58:50.253465  226189 status.go:255] checking status of ha-939233-m04 ...
	I0818 18:58:50.253768  226189 cli_runner.go:164] Run: docker container inspect ha-939233-m04 --format={{.State.Status}}
	I0818 18:58:50.271387  226189 status.go:330] ha-939233-m04 host status = "Stopped" (err=<nil>)
	I0818 18:58:50.271410  226189 status.go:343] host is not running, skipping remaining checks
	I0818 18:58:50.271418  226189 status.go:257] ha-939233-m04 status: &{Name:ha-939233-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (68.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-939233 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0818 18:58:50.939291  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-939233 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.807482396s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
ha_test.go:566: (dbg) Done: out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr: (1.074052944s)
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (68.10s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (50.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-939233 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-939233 --control-plane -v=7 --alsologtostderr: (49.362064588s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (50.35s)
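Growing the control plane back is a single command; with this run's profile it is:
    out/minikube-linux-arm64 node add -p ha-939233 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-939233 status -v=7 --alsologtostderr
The --control-plane flag is what distinguishes this step from the earlier AddWorkerNode test.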

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.74s)

                                                
                                    
x
+
TestJSONOutput/start/Command (56.04s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-946688 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0818 19:01:07.079322  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:01:34.780682  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-946688 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (56.03771501s)
--- PASS: TestJSONOutput/start/Command (56.04s)
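The JSON-output path used above can be reproduced with the same flags (profile name from this run):
    out/minikube-linux-arm64 start -p json-output-946688 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd
With --output=json, minikube emits one JSON event per line instead of the usual progress text; the TestErrorJSONOutput block further down shows what those events look like.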

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-946688 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-946688 --output=json --user=testUser
E0818 19:01:54.941308  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.77s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-946688 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-946688 --output=json --user=testUser: (5.765613358s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-820027 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-820027 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.481987ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8ddb849b-94a7-4c4a-ba3a-33e89939b113","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-820027] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8fbd504-f96b-425d-a0ce-d482043969a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"1a5310b9-775e-4822-a878-8b36fe55e89d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"832b5c55-435d-489e-8073-fe4ba057c2c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig"}}
	{"specversion":"1.0","id":"ac5d8aaf-206a-43ef-a738-c4f6831531d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube"}}
	{"specversion":"1.0","id":"56094725-07c9-4558-9347-2994d1184650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fe1779de-2168-45eb-9d75-0a17c7f92e50","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"19399dad-fe99-4daf-9b36-554b5c25dd5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-820027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-820027
--- PASS: TestErrorJSONOutput (0.23s)
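Each line in the stdout block above is a self-contained JSON event carrying specversion, id, source, type and a data payload, with the final io.k8s.sigs.minikube.error event holding the exit code (56) and the DRV_UNSUPPORTED_OS reason. As a hypothetical illustration of consuming that stream (assuming jq is available; not part of the test itself), one could extract the error code like so:
    out/minikube-linux-arm64 start -p json-output-error-820027 --memory=2200 --output=json --wait=true --driver=fail | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.exitcode'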

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (40.72s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-062968 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-062968 --network=: (38.58361325s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-062968" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-062968
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-062968: (2.11755651s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.72s)
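A sketch of the custom-network flow above, using the profile name from this run:
    out/minikube-linux-arm64 start -p docker-network-062968 --network=
    docker network ls --format {{.Name}}
Starting with an explicit --network= causes minikube to create a dedicated docker network for the cluster; the docker network ls call is how the test confirms it exists, and the delete at the end is expected to clean it up again.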

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (32.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-359475 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-359475 --network=bridge: (30.57822281s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-359475" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-359475
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-359475: (2.034366822s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.64s)

                                                
                                    
x
+
TestKicExistingNetwork (32.29s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-957218 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-957218 --network=existing-network: (30.203305847s)
helpers_test.go:175: Cleaning up "existing-network-957218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-957218
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-957218: (1.937913522s)
--- PASS: TestKicExistingNetwork (32.29s)

                                                
                                    
x
+
TestKicCustomSubnet (36.08s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-911102 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-911102 --subnet=192.168.60.0/24: (33.963140632s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-911102 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-911102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-911102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-911102: (2.088536963s)
--- PASS: TestKicCustomSubnet (36.08s)
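The subnet check can be reproduced directly from the two commands in the log:
    out/minikube-linux-arm64 start -p custom-subnet-911102 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-911102 --format "{{(index .IPAM.Config 0).Subnet}}"
The inspect call should print 192.168.60.0/24, i.e. the docker network backing the node was created on the requested subnet.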

                                                
                                    
x
+
TestKicStaticIP (38.98s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-341404 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-341404 --static-ip=192.168.200.200: (36.71257464s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-341404 ip
helpers_test.go:175: Cleaning up "static-ip-341404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-341404
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-341404: (2.096973724s)
--- PASS: TestKicStaticIP (38.98s)
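Similarly for the static-IP case:
    out/minikube-linux-arm64 start -p static-ip-341404 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-341404 ip
The ip subcommand is expected to echo the requested 192.168.200.200 back.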

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (65.98s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-899264 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-899264 --driver=docker  --container-runtime=containerd: (27.879902298s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-902168 --driver=docker  --container-runtime=containerd
E0818 19:06:07.079398  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-902168 --driver=docker  --container-runtime=containerd: (32.583944319s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-899264
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-902168
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-902168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-902168
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-902168: (1.987813425s)
helpers_test.go:175: Cleaning up "first-899264" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-899264
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-899264: (2.234918168s)
--- PASS: TestMinikubeProfile (65.98s)
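A minimal sketch of the profile-switching flow exercised here (profile names from this run):
    out/minikube-linux-arm64 start -p first-899264 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p second-902168 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 profile first-899264
    out/minikube-linux-arm64 profile list -ojson
    out/minikube-linux-arm64 profile second-902168
    out/minikube-linux-arm64 profile list -ojson
The JSON profile list is queried after each switch so the active profile can be checked.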

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.35s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-173917 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-173917 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.350090477s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.35s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-173917 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
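The mount workflow pairs a start with explicit mount flags and an ssh-based check; condensed, using the first profile from this run:
    out/minikube-linux-arm64 start -p mount-start-1-173917 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p mount-start-1-173917 ssh -- ls /minikube-host
The ssh ls of /minikube-host verifies that the host mount is visible inside the node; the same check is repeated against the second profile after delete, stop and restart in the blocks that follow.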

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.15s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-187193 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-187193 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.148988539s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.15s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-187193 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-173917 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-173917 --alsologtostderr -v=5: (1.608224438s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-187193 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-187193
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-187193: (1.191184732s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.55s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-187193
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-187193: (6.544284726s)
--- PASS: TestMountStart/serial/RestartStopped (7.55s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-187193 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (83.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-190569 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0818 19:06:54.940799  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-190569 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m22.537269613s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.09s)
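
The two-node bring-up above corresponds to the following commands (an illustrative sketch; `minikube` stands in for out/minikube-linux-arm64):

    # create a two-node cluster and wait for all components to be ready
    minikube start -p multinode-190569 --nodes=2 --memory=2200 --wait=true --driver=docker --container-runtime=containerd
    # both nodes should report Running
    minikube -p multinode-190569 status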

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (15.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-190569 -- rollout status deployment/busybox: (13.407258261s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-qpsqf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-tmvd7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-qpsqf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-tmvd7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-qpsqf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-tmvd7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.42s)
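
The per-pod DNS checks above can be repeated against any multinode profile. A hedged sketch follows; pod names such as busybox-7dff88458-qpsqf are generated per run and will differ.

    # deploy the busybox test workload and wait for the rollout
    minikube kubectl -p multinode-190569 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-190569 -- rollout status deployment/busybox
    # resolve cluster DNS from inside one of the pods (substitute the real pod name)
    minikube kubectl -p multinode-190569 -- exec busybox-7dff88458-qpsqf -- nslookup kubernetes.default.svc.cluster.local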

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-qpsqf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-qpsqf -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-tmvd7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-190569 -- exec busybox-7dff88458-tmvd7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (18.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-190569 -v 3 --alsologtostderr
E0818 19:08:18.019963  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-190569 -v 3 --alsologtostderr: (18.045188303s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.72s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-190569 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp testdata/cp-test.txt multinode-190569:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1031378147/001/cp-test_multinode-190569.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569:/home/docker/cp-test.txt multinode-190569-m02:/home/docker/cp-test_multinode-190569_multinode-190569-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m02 "sudo cat /home/docker/cp-test_multinode-190569_multinode-190569-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569:/home/docker/cp-test.txt multinode-190569-m03:/home/docker/cp-test_multinode-190569_multinode-190569-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m03 "sudo cat /home/docker/cp-test_multinode-190569_multinode-190569-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp testdata/cp-test.txt multinode-190569-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1031378147/001/cp-test_multinode-190569-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569-m02:/home/docker/cp-test.txt multinode-190569:/home/docker/cp-test_multinode-190569-m02_multinode-190569.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569 "sudo cat /home/docker/cp-test_multinode-190569-m02_multinode-190569.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569-m02:/home/docker/cp-test.txt multinode-190569-m03:/home/docker/cp-test_multinode-190569-m02_multinode-190569-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m03 "sudo cat /home/docker/cp-test_multinode-190569-m02_multinode-190569-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp testdata/cp-test.txt multinode-190569-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1031378147/001/cp-test_multinode-190569-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569-m03:/home/docker/cp-test.txt multinode-190569:/home/docker/cp-test_multinode-190569-m03_multinode-190569.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569 "sudo cat /home/docker/cp-test_multinode-190569-m03_multinode-190569.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 cp multinode-190569-m03:/home/docker/cp-test.txt multinode-190569-m02:/home/docker/cp-test_multinode-190569-m03_multinode-190569-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 ssh -n multinode-190569-m02 "sudo cat /home/docker/cp-test_multinode-190569-m03_multinode-190569-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.16s)
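
The copy round-trip exercised above boils down to a few commands. The sketch below is illustrative and reuses the node names from this run, with `minikube` in place of out/minikube-linux-arm64.

    # copy a local file into the control-plane node
    minikube -p multinode-190569 cp testdata/cp-test.txt multinode-190569:/home/docker/cp-test.txt
    # fan it out from the control plane to a worker node
    minikube -p multinode-190569 cp multinode-190569:/home/docker/cp-test.txt multinode-190569-m02:/home/docker/cp-test.txt
    # read it back over ssh to confirm it arrived intact
    minikube -p multinode-190569 ssh -n multinode-190569-m02 "sudo cat /home/docker/cp-test.txt"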

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-190569 node stop m03: (1.221414229s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-190569 status: exit status 7 (519.734285ms)

                                                
                                                
-- stdout --
	multinode-190569
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-190569-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-190569-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-190569 status --alsologtostderr: exit status 7 (505.368628ms)

                                                
                                                
-- stdout --
	multinode-190569
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-190569-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-190569-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:08:49.101705  279709 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:08:49.101923  279709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:08:49.101953  279709 out.go:358] Setting ErrFile to fd 2...
	I0818 19:08:49.101977  279709 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:08:49.102223  279709 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 19:08:49.102444  279709 out.go:352] Setting JSON to false
	I0818 19:08:49.102513  279709 mustload.go:65] Loading cluster: multinode-190569
	I0818 19:08:49.102623  279709 notify.go:220] Checking for updates...
	I0818 19:08:49.102943  279709 config.go:182] Loaded profile config "multinode-190569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 19:08:49.102964  279709 status.go:255] checking status of multinode-190569 ...
	I0818 19:08:49.103851  279709 cli_runner.go:164] Run: docker container inspect multinode-190569 --format={{.State.Status}}
	I0818 19:08:49.122197  279709 status.go:330] multinode-190569 host status = "Running" (err=<nil>)
	I0818 19:08:49.122223  279709 host.go:66] Checking if "multinode-190569" exists ...
	I0818 19:08:49.122518  279709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-190569
	I0818 19:08:49.149386  279709 host.go:66] Checking if "multinode-190569" exists ...
	I0818 19:08:49.149702  279709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:08:49.149758  279709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-190569
	I0818 19:08:49.167137  279709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38459 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/multinode-190569/id_rsa Username:docker}
	I0818 19:08:49.260852  279709 ssh_runner.go:195] Run: systemctl --version
	I0818 19:08:49.265344  279709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:08:49.277581  279709 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 19:08:49.333274  279709 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-18 19:08:49.322984271 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 19:08:49.333865  279709 kubeconfig.go:125] found "multinode-190569" server: "https://192.168.67.2:8443"
	I0818 19:08:49.333901  279709 api_server.go:166] Checking apiserver status ...
	I0818 19:08:49.333955  279709 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:08:49.345343  279709 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1444/cgroup
	I0818 19:08:49.355696  279709 api_server.go:182] apiserver freezer: "6:freezer:/docker/3839b5aafe7dba7de2712d14db6a0f7f3aa4131d061af1f5f3c6fb8962e423b3/kubepods/burstable/pod8da9cc34c4203183269dd7f5e8916bf9/b5e731079ae2133e363d595914ccdf1a75f63127442eeec998ffd9fe5db00fc2"
	I0818 19:08:49.355765  279709 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/3839b5aafe7dba7de2712d14db6a0f7f3aa4131d061af1f5f3c6fb8962e423b3/kubepods/burstable/pod8da9cc34c4203183269dd7f5e8916bf9/b5e731079ae2133e363d595914ccdf1a75f63127442eeec998ffd9fe5db00fc2/freezer.state
	I0818 19:08:49.364805  279709 api_server.go:204] freezer state: "THAWED"
	I0818 19:08:49.364836  279709 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0818 19:08:49.372320  279709 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0818 19:08:49.372366  279709 status.go:422] multinode-190569 apiserver status = Running (err=<nil>)
	I0818 19:08:49.372380  279709 status.go:257] multinode-190569 status: &{Name:multinode-190569 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:08:49.372410  279709 status.go:255] checking status of multinode-190569-m02 ...
	I0818 19:08:49.372740  279709 cli_runner.go:164] Run: docker container inspect multinode-190569-m02 --format={{.State.Status}}
	I0818 19:08:49.394696  279709 status.go:330] multinode-190569-m02 host status = "Running" (err=<nil>)
	I0818 19:08:49.394721  279709 host.go:66] Checking if "multinode-190569-m02" exists ...
	I0818 19:08:49.395023  279709 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-190569-m02
	I0818 19:08:49.413033  279709 host.go:66] Checking if "multinode-190569-m02" exists ...
	I0818 19:08:49.413352  279709 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:08:49.413396  279709 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-190569-m02
	I0818 19:08:49.432151  279709 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:38464 SSHKeyPath:/home/jenkins/minikube-integration/19423-154159/.minikube/machines/multinode-190569-m02/id_rsa Username:docker}
	I0818 19:08:49.520967  279709 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:08:49.532646  279709 status.go:257] multinode-190569-m02 status: &{Name:multinode-190569-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:08:49.532684  279709 status.go:255] checking status of multinode-190569-m03 ...
	I0818 19:08:49.533044  279709 cli_runner.go:164] Run: docker container inspect multinode-190569-m03 --format={{.State.Status}}
	I0818 19:08:49.549311  279709 status.go:330] multinode-190569-m03 host status = "Stopped" (err=<nil>)
	I0818 19:08:49.549345  279709 status.go:343] host is not running, skipping remaining checks
	I0818 19:08:49.549353  279709 status.go:257] multinode-190569-m03 status: &{Name:multinode-190569-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)
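
Stopping a single node, as above, leaves the rest of the cluster running; `status` then exits with code 7 because one host is down. Illustrative sketch under the same assumptions as the earlier ones:

    # stop only the third node
    minikube -p multinode-190569 node stop m03
    # status reports the stopped node and exits non-zero (exit status 7)
    minikube -p multinode-190569 status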

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-190569 node start m03 -v=7 --alsologtostderr: (8.83556685s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.58s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (91.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-190569
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-190569
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-190569: (24.996759084s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-190569 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-190569 --wait=true -v=8 --alsologtostderr: (1m6.315868589s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-190569
--- PASS: TestMultiNode/serial/RestartKeepsNodes (91.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-190569 node delete m03: (4.955633978s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.66s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-190569 stop: (23.756075176s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-190569 status: exit status 7 (312.262247ms)

                                                
                                                
-- stdout --
	multinode-190569
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-190569-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-190569 status --alsologtostderr: exit status 7 (163.067127ms)

                                                
                                                
-- stdout --
	multinode-190569
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-190569-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:11:00.404357  288173 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:11:00.404519  288173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:11:00.404530  288173 out.go:358] Setting ErrFile to fd 2...
	I0818 19:11:00.404537  288173 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:11:00.404791  288173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 19:11:00.404990  288173 out.go:352] Setting JSON to false
	I0818 19:11:00.405033  288173 mustload.go:65] Loading cluster: multinode-190569
	I0818 19:11:00.405156  288173 notify.go:220] Checking for updates...
	I0818 19:11:00.405519  288173 config.go:182] Loaded profile config "multinode-190569": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 19:11:00.405541  288173 status.go:255] checking status of multinode-190569 ...
	I0818 19:11:00.406085  288173 cli_runner.go:164] Run: docker container inspect multinode-190569 --format={{.State.Status}}
	I0818 19:11:00.427338  288173 status.go:330] multinode-190569 host status = "Stopped" (err=<nil>)
	I0818 19:11:00.427364  288173 status.go:343] host is not running, skipping remaining checks
	I0818 19:11:00.427373  288173 status.go:257] multinode-190569 status: &{Name:multinode-190569 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:11:00.427421  288173 status.go:255] checking status of multinode-190569-m02 ...
	I0818 19:11:00.427760  288173 cli_runner.go:164] Run: docker container inspect multinode-190569-m02 --format={{.State.Status}}
	I0818 19:11:00.451926  288173 status.go:330] multinode-190569-m02 host status = "Stopped" (err=<nil>)
	I0818 19:11:00.451965  288173 status.go:343] host is not running, skipping remaining checks
	I0818 19:11:00.451972  288173 status.go:257] multinode-190569-m02 status: &{Name:multinode-190569-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.23s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (50.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-190569 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0818 19:11:07.079279  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-190569 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.277238287s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-190569 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.95s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (34.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-190569
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-190569-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-190569-m02 --driver=docker  --container-runtime=containerd: exit status 14 (89.303842ms)

                                                
                                                
-- stdout --
	* [multinode-190569-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-190569-m02' is duplicated with machine name 'multinode-190569-m02' in profile 'multinode-190569'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-190569-m03 --driver=docker  --container-runtime=containerd
E0818 19:11:54.941306  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-190569-m03 --driver=docker  --container-runtime=containerd: (32.271341642s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-190569
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-190569: exit status 80 (339.96777ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-190569 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-190569-m03 already exists in multinode-190569-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-190569-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-190569-m03: (1.959888203s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.71s)

                                                
                                    
x
+
TestPreload (116.72s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-416154 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-416154 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m15.919846603s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-416154 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-416154 image pull gcr.io/k8s-minikube/busybox: (1.183281503s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-416154
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-416154: (11.992920588s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-416154 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-416154 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (25.002007601s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-416154 image list
helpers_test.go:175: Cleaning up "test-preload-416154" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-416154
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-416154: (2.391627018s)
--- PASS: TestPreload (116.72s)
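
The preload scenario can be reproduced as follows (illustrative sketch; `minikube` stands in for the test binary, flags copied from the log):

    # start an older Kubernetes version with the preloaded image tarball disabled
    minikube start -p test-preload-416154 --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    # pull an extra image, stop, then restart the same profile
    minikube -p test-preload-416154 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-416154
    minikube start -p test-preload-416154 --memory=2200 --wait=true --driver=docker --container-runtime=containerd
    # the previously pulled image should still be listed after the restart
    minikube -p test-preload-416154 image list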

                                                
                                    
x
+
TestScheduledStopUnix (106.01s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-911794 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-911794 --memory=2048 --driver=docker  --container-runtime=containerd: (29.380055539s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-911794 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-911794 -n scheduled-stop-911794
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-911794 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-911794 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-911794 -n scheduled-stop-911794
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-911794
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-911794 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0818 19:16:07.079676  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-911794
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-911794: exit status 7 (66.746878ms)

                                                
                                                
-- stdout --
	scheduled-stop-911794
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-911794 -n scheduled-stop-911794
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-911794 -n scheduled-stop-911794: exit status 7 (66.995337ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-911794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-911794
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-911794: (5.092384862s)
--- PASS: TestScheduledStopUnix (106.01s)
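
The scheduled-stop flow above maps to the commands below (a sketch only, with `minikube` in place of out/minikube-linux-arm64 and the profile name from this run):

    # schedule a stop five minutes out, then cancel it
    minikube stop -p scheduled-stop-911794 --schedule 5m
    minikube stop -p scheduled-stop-911794 --cancel-scheduled
    # schedule a short stop and let it fire; the host should then report Stopped
    minikube stop -p scheduled-stop-911794 --schedule 15s
    minikube status --format={{.Host}} -p scheduled-stop-911794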

                                                
                                    
x
+
TestInsufficientStorage (10.11s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-153169 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-153169 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.620470705s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f488131f-0bf4-47c4-a291-6e746afe942f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-153169] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"24570b87-73d2-4f68-9b25-f467711786f0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"eb3a5489-8aa4-4f24-8d90-1fc5a5dd3afc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7fd8d9b2-7c51-49a2-bfe5-ee34e9506016","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig"}}
	{"specversion":"1.0","id":"7068e6c0-95af-4689-b13c-3043b18e11f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube"}}
	{"specversion":"1.0","id":"e42d457b-0e79-44fa-8959-8d33d8bcbb1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e2598a17-61b6-49d7-b892-131785379ab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cf8439c1-387f-45ca-8323-98f072bf0fb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"e8f1a984-a797-4d52-892b-ccae8c0db924","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c4f5515b-8d12-4dae-a5df-c6148d00a492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8e0b0f44-2daf-425f-ae9b-461cae9ff3f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f1ceb6fb-694a-4b7a-99fa-f41575281ca5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-153169\" primary control-plane node in \"insufficient-storage-153169\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f831d11-bb47-46ef-805a-ca85f1274e14","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1723740748-19452 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"26a79bb7-3702-459b-9970-4421c460fe78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a56bc08-eab3-46f0-a418-e7362bf266db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-153169 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-153169 --output=json --layout=cluster: exit status 7 (293.377667ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-153169","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-153169","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:16:20.798491  306875 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-153169" does not appear in /home/jenkins/minikube-integration/19423-154159/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-153169 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-153169 --output=json --layout=cluster: exit status 7 (299.444793ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-153169","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-153169","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:16:21.099501  306939 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-153169" does not appear in /home/jenkins/minikube-integration/19423-154159/kubeconfig
	E0818 19:16:21.110692  306939 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/insufficient-storage-153169/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-153169" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-153169
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-153169: (1.894846584s)
--- PASS: TestInsufficientStorage (10.11s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (90.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E0818 19:21:07.079015  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2789922412 start -p running-upgrade-830325 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2789922412 start -p running-upgrade-830325 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.696907365s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-830325 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0818 19:21:54.940879  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-830325 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.344105114s)
helpers_test.go:175: Cleaning up "running-upgrade-830325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-830325
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-830325: (2.70822205s)
--- PASS: TestRunningBinaryUpgrade (90.20s)

                                                
                                    
x
+
TestKubernetesUpgrade (347.79s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.497781152s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-246465
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-246465: (1.21627215s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-246465 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-246465 status --format={{.Host}}: exit status 7 (67.125827ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m38.992722158s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-246465 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (120.689601ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-246465] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-246465
	    minikube start -p kubernetes-upgrade-246465 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2464652 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-246465 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.530834519s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-246465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-246465
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-246465: (2.185308011s)
--- PASS: TestKubernetesUpgrade (347.79s)
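Note: the upgrade path exercised above can be reproduced outside the test harness with the same flags; this is a sketch using the profile name from the log (any profile name works, verbosity flags omitted), and the final command is expected to fail with K8S_DOWNGRADE_UNSUPPORTED (exit status 106):

	# start on the old release, stop, then restart on the new release (in-place upgrade)
	out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-246465
	out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd
	# downgrading an existing cluster is not supported and should exit non-zero
	out/minikube-linux-arm64 start -p kubernetes-upgrade-246465 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd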

                                                
                                    
x
+
TestMissingContainerUpgrade (170.43s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1723983324 start -p missing-upgrade-295638 --memory=2200 --driver=docker  --container-runtime=containerd
E0818 19:16:54.940820  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1723983324 start -p missing-upgrade-295638 --memory=2200 --driver=docker  --container-runtime=containerd: (1m27.585992856s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-295638
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-295638
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-295638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-295638 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m18.994715175s)
helpers_test.go:175: Cleaning up "missing-upgrade-295638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-295638
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-295638: (2.268030447s)
--- PASS: TestMissingContainerUpgrade (170.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-710275 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-710275 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (73.678437ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-710275] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
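Note: this subtest only checks flag validation; --no-kubernetes and --kubernetes-version are mutually exclusive. A sketch of the failing and the working invocations, taken from the commands in this group:

	# rejected with MK_USAGE (exit status 14): conflicting flags
	out/minikube-linux-arm64 start -p NoKubernetes-710275 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd
	# if a version was set globally, clear it as the error message suggests
	minikube config unset kubernetes-version
	# then start without Kubernetes
	out/minikube-linux-arm64 start -p NoKubernetes-710275 --no-kubernetes --driver=docker --container-runtime=containerd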

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (39.45s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-710275 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-710275 --driver=docker  --container-runtime=containerd: (38.656486944s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-710275 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.45s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-710275 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-710275 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.893441944s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-710275 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-710275 status -o json: exit status 2 (287.93471ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-710275","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-710275
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-710275: (1.866137599s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-710275 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-710275 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.835533101s)
--- PASS: TestNoKubernetes/serial/Start (5.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-710275 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-710275 "sudo systemctl is-active --quiet service kubelet": exit status 1 (316.97941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
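Note: the verification relies on systemctl's exit code; a non-zero status from is-active means kubelet is not running in the --no-kubernetes profile. A minimal check using the same command as the test:

	out/minikube-linux-arm64 ssh -p NoKubernetes-710275 "sudo systemctl is-active --quiet service kubelet"
	echo $?   # expected to be non-zero while Kubernetes is disabled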

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-710275
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-710275: (1.260335987s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-710275 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-710275 --driver=docker  --container-runtime=containerd: (6.870276074s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-710275 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-710275 "sudo systemctl is-active --quiet service kubelet": exit status 1 (338.942203ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.69s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (108.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1619935405 start -p stopped-upgrade-305491 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1619935405 start -p stopped-upgrade-305491 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.342042342s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1619935405 -p stopped-upgrade-305491 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1619935405 -p stopped-upgrade-305491 stop: (20.063073976s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-305491 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-305491 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.545086612s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (108.95s)
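Note: the stopped-binary upgrade path mirrors the commands above: a cluster created and stopped with a legacy minikube release (v1.26.0 here) is restarted with the binary under test. A sketch with the temporary legacy binary path taken from the log (verbosity flags omitted):

	/tmp/minikube-v1.26.0.1619935405 start -p stopped-upgrade-305491 --memory=2200 --vm-driver=docker --container-runtime=containerd
	/tmp/minikube-v1.26.0.1619935405 -p stopped-upgrade-305491 stop
	out/minikube-linux-arm64 start -p stopped-upgrade-305491 --memory=2200 --driver=docker --container-runtime=containerd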

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-305491
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-305491: (1.296284971s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

                                                
                                    
x
+
TestPause/serial/Start (65.79s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-565930 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-565930 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m5.786691781s)
--- PASS: TestPause/serial/Start (65.79s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.46s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-565930 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-565930 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.430679132s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.46s)

                                                
                                    
x
+
TestPause/serial/Pause (1.11s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-565930 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-565930 --alsologtostderr -v=5: (1.110623951s)
--- PASS: TestPause/serial/Pause (1.11s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.49s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-565930 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-565930 --output=json --layout=cluster: exit status 2 (488.759113ms)

                                                
                                                
-- stdout --
	{"Name":"pause-565930","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-565930","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)
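Note: for a paused profile the status command exits 2 and reports StatusName "Paused" (status code 418) for the cluster and apiserver, with the kubelet reported as Stopped, as in the JSON above. The same check:

	out/minikube-linux-arm64 status -p pause-565930 --output=json --layout=cluster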

                                                
                                    
x
+
TestPause/serial/Unpause (0.98s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-565930 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.98s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.19s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-565930 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-565930 --alsologtostderr -v=5: (1.192249887s)
--- PASS: TestPause/serial/PauseAgain (1.19s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.99s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-565930 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-565930 --alsologtostderr -v=5: (2.991715237s)
--- PASS: TestPause/serial/DeletePaused (2.99s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-565930
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-565930: exit status 1 (16.570323ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-565930: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-555882 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-555882 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (244.027349ms)

                                                
                                                
-- stdout --
	* [false-555882] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:24:03.442399  348658 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:24:03.442531  348658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:24:03.442558  348658 out.go:358] Setting ErrFile to fd 2...
	I0818 19:24:03.442563  348658 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:24:03.442786  348658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-154159/.minikube/bin
	I0818 19:24:03.443186  348658 out.go:352] Setting JSON to false
	I0818 19:24:03.444104  348658 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":101188,"bootTime":1723907856,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0818 19:24:03.444169  348658 start.go:139] virtualization:  
	I0818 19:24:03.448519  348658 out.go:177] * [false-555882] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0818 19:24:03.451104  348658 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 19:24:03.451166  348658 notify.go:220] Checking for updates...
	I0818 19:24:03.453843  348658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 19:24:03.456113  348658 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-154159/kubeconfig
	I0818 19:24:03.457949  348658 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-154159/.minikube
	I0818 19:24:03.459850  348658 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0818 19:24:03.462453  348658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 19:24:03.465269  348658 config.go:182] Loaded profile config "force-systemd-flag-451374": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0818 19:24:03.465429  348658 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 19:24:03.500096  348658 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0818 19:24:03.500214  348658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0818 19:24:03.582282  348658 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:41 SystemTime:2024-08-18 19:24:03.570304239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214896640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0818 19:24:03.582412  348658 docker.go:307] overlay module found
	I0818 19:24:03.586544  348658 out.go:177] * Using the docker driver based on user configuration
	I0818 19:24:03.588655  348658 start.go:297] selected driver: docker
	I0818 19:24:03.588676  348658 start.go:901] validating driver "docker" against <nil>
	I0818 19:24:03.588699  348658 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 19:24:03.592444  348658 out.go:201] 
	W0818 19:24:03.595225  348658 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0818 19:24:03.597904  348658 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-555882 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-555882" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-555882

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-555882"

                                                
                                                
----------------------- debugLogs end: false-555882 [took: 4.443578826s] --------------------------------
helpers_test.go:175: Cleaning up "false-555882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-555882
--- PASS: TestNetworkPlugins/group/false (4.86s)
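Note: the "false" CNI group never starts a cluster; it only verifies the usage error, because the containerd runtime requires a CNI and --cni=false is therefore rejected with MK_USAGE (exit status 14). The failing invocation from the log:

	out/minikube-linux-arm64 start -p false-555882 --memory=2048 --cni=false --driver=docker --container-runtime=containerd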

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (151.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-216078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0818 19:26:07.079071  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:26:54.941075  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-216078 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m31.779116465s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (151.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-216078 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3e229523-e0ed-4fe4-9d19-6f24bc6fdcfc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3e229523-e0ed-4fe4-9d19-6f24bc6fdcfc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.006139295s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-216078 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.78s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-216078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-216078 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.202626896s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-216078 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (13.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-216078 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-216078 --alsologtostderr -v=3: (13.115225156s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (68.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-091348 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-091348 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m8.281005642s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (68.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-216078 -n old-k8s-version-216078
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-216078 -n old-k8s-version-216078: exit status 7 (85.632397ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-216078 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-091348 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [07212690-0308-405b-82dd-e91f04974107] Pending
helpers_test.go:344: "busybox" [07212690-0308-405b-82dd-e91f04974107] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [07212690-0308-405b-82dd-e91f04974107] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005414465s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-091348 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-091348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-091348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.008131908s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-091348 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-091348 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-091348 --alsologtostderr -v=3: (12.108434357s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-091348 -n no-preload-091348
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-091348 -n no-preload-091348: exit status 7 (72.084282ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-091348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (266.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-091348 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0818 19:31:07.079621  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:31:54.940555  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-091348 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m26.336635563s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-091348 -n no-preload-091348
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.72s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wqh64" [9b64466a-8fb6-499e-8179-bf21679de7fd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003765972s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wqh64" [9b64466a-8fb6-499e-8179-bf21679de7fd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004141793s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-091348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-091348 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-091348 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-091348 -n no-preload-091348
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-091348 -n no-preload-091348: exit status 2 (324.218803ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-091348 -n no-preload-091348
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-091348 -n no-preload-091348: exit status 2 (331.31937ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-091348 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-091348 -n no-preload-091348
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-091348 -n no-preload-091348
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (55.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-568075 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-568075 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (55.313693975s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fmtrv" [0828e667-215d-43e7-89a6-e53edc091b32] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004452751s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-fmtrv" [0828e667-215d-43e7-89a6-e53edc091b32] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004902213s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-216078 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-216078 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-216078 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-216078 --alsologtostderr -v=1: (1.179706559s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-216078 -n old-k8s-version-216078
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-216078 -n old-k8s-version-216078: exit status 2 (462.246908ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-216078 -n old-k8s-version-216078
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-216078 -n old-k8s-version-216078: exit status 2 (528.051257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-216078 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-216078 --alsologtostderr -v=1: (1.141618328s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-216078 -n old-k8s-version-216078
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-216078 -n old-k8s-version-216078
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.78s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-509819 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-509819 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (55.781777061s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.78s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-568075 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7efb4399-f494-4433-ac9b-2079eb533214] Pending
helpers_test.go:344: "busybox" [7efb4399-f494-4433-ac9b-2079eb533214] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7efb4399-f494-4433-ac9b-2079eb533214] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003464458s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-568075 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-568075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-568075 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.398477025s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-568075 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.56s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-568075 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-568075 --alsologtostderr -v=3: (12.321990725s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-568075 -n embed-certs-568075
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-568075 -n embed-certs-568075: exit status 7 (75.561818ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-568075 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (302.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-568075 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-568075 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (5m1.795802471s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-568075 -n embed-certs-568075
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (302.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-509819 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a402e041-8c92-4d18-a986-1a142fef439d] Pending
helpers_test.go:344: "busybox" [a402e041-8c92-4d18-a986-1a142fef439d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a402e041-8c92-4d18-a986-1a142fef439d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004198208s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-509819 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-509819 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-509819 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.301693482s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-509819 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.46s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-509819 --alsologtostderr -v=3
E0818 19:36:07.079251  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-509819 --alsologtostderr -v=3: (12.477379899s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819: exit status 7 (74.152395ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-509819 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-509819 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0818 19:36:54.941144  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:54.096030  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:54.102445  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:54.113820  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:54.135239  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:54.176720  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:54.258241  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:54.419751  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:54.741298  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:55.383141  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:56.665064  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:59.226484  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:38:04.348423  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:38:14.590820  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:38:35.072930  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:12.473802  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:12.480191  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:12.491659  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:12.513137  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:12.554612  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:12.636154  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:12.797778  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:13.119844  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:13.761564  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:15.043361  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:16.034813  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:17.605836  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:22.727621  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:32.969420  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:39:53.450852  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:40:34.412244  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-509819 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m25.974466908s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8snmb" [b883c735-8d19-47a2-8901-41dea514a918] Running
E0818 19:40:37.956753  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003503981s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5p2xk" [bcb372d5-72fb-417a-a5c1-9c02612e318e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004092212s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8snmb" [b883c735-8d19-47a2-8901-41dea514a918] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003960775s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-509819 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5p2xk" [bcb372d5-72fb-417a-a5c1-9c02612e318e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008682693s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-568075 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-509819 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-509819 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819: exit status 2 (328.473882ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819: exit status 2 (308.460764ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-509819 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-509819 -n default-k8s-diff-port-509819
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.59s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-568075 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (4.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-568075 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-568075 --alsologtostderr -v=1: (1.332362868s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-568075 -n embed-certs-568075
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-568075 -n embed-certs-568075: exit status 2 (539.40867ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-568075 -n embed-certs-568075
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-568075 -n embed-certs-568075: exit status 2 (440.924477ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-568075 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-568075 --alsologtostderr -v=1: (1.026768612s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-568075 -n embed-certs-568075
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-568075 -n embed-certs-568075
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.44s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (43.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-308781 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-308781 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (43.095035048s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.10s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (72.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0818 19:41:07.079459  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:41:38.024446  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m12.184834175s)
--- PASS: TestNetworkPlugins/group/auto/Start (72.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-308781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-308781 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.758137523s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-308781 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-308781 --alsologtostderr -v=3: (1.38864688s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.39s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-308781 -n newest-cni-308781
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-308781 -n newest-cni-308781: exit status 7 (117.650626ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-308781 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (16.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-308781 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0818 19:41:54.941378  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:41:56.334557  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-308781 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (15.794905709s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-308781 -n newest-cni-308781
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.17s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-308781 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-308781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-308781 -n newest-cni-308781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-308781 -n newest-cni-308781: exit status 2 (325.936644ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-308781 -n newest-cni-308781
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-308781 -n newest-cni-308781: exit status 2 (343.272734ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-308781 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-308781 -n newest-cni-308781
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-308781 -n newest-cni-308781
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.08s)
E0818 19:46:54.940832  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/addons-677874/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (58.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (58.092557033s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.09s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-555882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.60s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-555882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2s998" [083c2e9c-d745-4bef-92a1-8e697690fe30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2s998" [083c2e9c-d745-4bef-92a1-8e697690fe30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.006083828s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.57s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-555882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
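The Localhost and HairPin probes differ only in their target: localhost:8080 confirms the pod can reach a port on its own loopback, while netcat:8080 confirms hairpin traffic, i.e. the pod reaching itself back through its own Service. In the nc invocation, -z connects and closes without sending data, -w 5 gives up after five seconds, and -i 5 spaces out successive probes. An exit status of 0 means the port was reachable, so a manual check can be chained directly:

kubectl --context auto-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" && echo "hairpin OK"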

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (68.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0818 19:42:54.095995  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m8.409620585s)
--- PASS: TestNetworkPlugins/group/calico/Start (68.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-ftx4d" [0d001128-5a46-4811-9c89-86f5916f9562] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003980086s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
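The ControllerPod step waits for the CNI agent pod (app=kindnet in kube-system) to report Running. Outside the test harness the same condition can be checked directly; the sketch below assumes the agent is deployed as a DaemonSet named kindnet, which is how minikube normally installs it:

kubectl --context kindnet-555882 -n kube-system get pods -l app=kindnet -o wide
kubectl --context kindnet-555882 -n kube-system rollout status ds/kindnet --timeout=2m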

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-555882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-555882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9qmtl" [5e104bd4-4968-4552-892e-d8bf807aaf51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9qmtl" [5e104bd4-4968-4552-892e-d8bf807aaf51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.003397374s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-555882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0818 19:43:21.798882  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/old-k8s-version-216078/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (52.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (52.932955159s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.93s)
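Unlike the named CNIs above, this profile passes minikube a manifest path (--cni=testdata/kube-flannel.yaml) rather than a keyword, so minikube applies that file as the cluster's CNI. The same mechanism works for any CNI manifest; in the sketch below, custom-cni-demo and my-cni.yaml are placeholders:

out/minikube-linux-arm64 start -p custom-cni-demo --memory=3072 --cni=/path/to/my-cni.yaml --driver=docker --container-runtime=containerd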

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-z4k7x" [3ffeaeb8-0261-4c29-a4f8-1e82ea821332] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005734531s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-555882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-555882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-v94sz" [023f12a1-6403-4209-bad3-b6bbbe0c6b4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-v94sz" [023f12a1-6403-4209-bad3-b6bbbe0c6b4e] Running
E0818 19:44:12.473344  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.00641176s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-555882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (46.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (46.473388284s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (46.47s)
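--enable-default-cni=true is the legacy way of requesting the built-in bridge CNI rather than a named plugin, and current minikube treats it much like --cni=bridge (stated here as an assumption, not something shown in this log). One way to see what actually got installed is to inspect the CNI config directory on the node; the exact file names may differ:

out/minikube-linux-arm64 ssh -p enable-default-cni-555882 "ls /etc/cni/net.d && cat /etc/cni/net.d/* 2>/dev/null"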

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-555882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-555882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vmdg2" [7690b390-0364-477f-b6e2-3b984bd12501] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0818 19:44:40.176695  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/no-preload-091348/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-vmdg2" [7690b390-0364-477f-b6e2-3b984bd12501] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004296803s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-555882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (55.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (55.146777786s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-555882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-555882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gq2x6" [cb12d8d3-cfd8-438b-9f83-043480821de3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gq2x6" [cb12d8d3-cfd8-438b-9f83-043480821de3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005231452s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-555882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (46.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0818 19:46:07.079759  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/functional-249969/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:09.755299  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/default-k8s-diff-port-509819/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-555882 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (46.844700172s)
--- PASS: TestNetworkPlugins/group/bridge/Start (46.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-q9pcw" [d510a40e-9694-4dac-b810-ef789483934e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006596845s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-555882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (12.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-555882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h6c77" [6475c92e-050f-447c-a5ae-b4351a28c734] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h6c77" [6475c92e-050f-447c-a5ae-b4351a28c734] Running
E0818 19:46:30.237055  159549 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/default-k8s-diff-port-509819/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004098086s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-555882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-555882 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-555882 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2g2j7" [c57df05c-ee9c-46bd-9b3f-d07f9d29759a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2g2j7" [c57df05c-ee9c-46bd-9b3f-d07f9d29759a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003341512s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-555882 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-555882 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (28/328)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-520748 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-520748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-520748
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-989700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-989700
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-555882 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-555882" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19423-154159/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 18 Aug 2024 19:23:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-451374
contexts:
- context:
    cluster: force-systemd-flag-451374
    extensions:
    - extension:
        last-update: Sun, 18 Aug 2024 19:23:58 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: context_info
    namespace: default
    user: force-systemd-flag-451374
  name: force-systemd-flag-451374
current-context: force-systemd-flag-451374
kind: Config
preferences: {}
users:
- name: force-systemd-flag-451374
  user:
    client-certificate: /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/force-systemd-flag-451374/client.crt
    client-key: /home/jenkins/minikube-integration/19423-154159/.minikube/profiles/force-systemd-flag-451374/client.key
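The config dump above also explains the long run of "context was not found" and "Profile ... not found" messages in this debugLogs section: the kubenet test skips before a kubenet-555882 cluster is ever created, so the kubeconfig's current-context still points at the unrelated force-systemd-flag-451374 profile. A quick way to confirm which context the debug commands were really running against:

kubectl config current-context
kubectl config get-contexts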

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-555882

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-555882"

                                                
                                                
----------------------- debugLogs end: kubenet-555882 [took: 5.035560487s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-555882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-555882
--- SKIP: TestNetworkPlugins/group/kubenet (5.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-555882 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-555882" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-555882

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-555882" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-555882"

                                                
                                                
----------------------- debugLogs end: cilium-555882 [took: 5.504799164s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-555882" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-555882
--- SKIP: TestNetworkPlugins/group/cilium (5.69s)

                                                
                                    