Test Report: Docker_Linux_containerd_arm64 19640

e5b440675da001c9bcd97e7df406aef1ef05cbc8:2024-09-14:36202

Failed tests (2/328)

Order  Failed test                                              Duration (s)
29     TestAddons/serial/Volcano                                200.82
300    TestStartStop/group/old-k8s-version/serial/SecondStart   376.3
TestAddons/serial/Volcano (200.82s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 49.655565ms
addons_test.go:905: volcano-admission stabilized in 49.731027ms
addons_test.go:913: volcano-controller stabilized in 49.776123ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-dj5ch" [b1b19b7a-ed39-4b82-93fb-20f77d3edcac] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004999698s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-spsmg" [072c6e1a-7d77-445f-b6ca-1172bfcde8f0] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003683068s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-7cvvq" [293022c5-4bd2-423e-bf07-413d0587dec2] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003596333s
addons_test.go:932: (dbg) Run:  kubectl --context addons-131319 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-131319 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-131319 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [d938520b-916d-4ebe-9178-71fe279e53dd] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-131319 -n addons-131319
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-14 00:28:47.688366271 +0000 UTC m=+435.268133226
addons_test.go:964: (dbg) Run:  kubectl --context addons-131319 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-131319 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-fcfe0b61-a9e2-4f00-9e50-298458a9e378
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6l25m (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-6l25m:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-131319 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-131319 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
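
The failure above is a scheduling capacity problem rather than a Volcano malfunction: the test job requests cpu: 1 (both limits and requests), the single minikube node is capped at 2 CPUs (see the HostConfig and cluster config in the post-mortem below), and the control-plane plus addon pods leave less than one full CPU allocatable. A quick way to confirm is to compare the node's allocated resources against per-pod CPU requests; a minimal sketch, reusing this run's context and node name (addons-131319):

    kubectl --context addons-131319 describe node addons-131319 | grep -A 10 "Allocated resources"
    kubectl --context addons-131319 get pods -A -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu
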
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-131319
helpers_test.go:235: (dbg) docker inspect addons-131319:

-- stdout --
	[
	    {
	        "Id": "10f831698d93f854f8885156757e9bfb243afc16686dda2b77a7716f77ecfc77",
	        "Created": "2024-09-14T00:22:17.220131266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1461088,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T00:22:17.36413811Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fe3365929e6ce54b4c06f0bc3d1500dff08f535844ef4978f2c45cd67c542134",
	        "ResolvConfPath": "/var/lib/docker/containers/10f831698d93f854f8885156757e9bfb243afc16686dda2b77a7716f77ecfc77/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/10f831698d93f854f8885156757e9bfb243afc16686dda2b77a7716f77ecfc77/hostname",
	        "HostsPath": "/var/lib/docker/containers/10f831698d93f854f8885156757e9bfb243afc16686dda2b77a7716f77ecfc77/hosts",
	        "LogPath": "/var/lib/docker/containers/10f831698d93f854f8885156757e9bfb243afc16686dda2b77a7716f77ecfc77/10f831698d93f854f8885156757e9bfb243afc16686dda2b77a7716f77ecfc77-json.log",
	        "Name": "/addons-131319",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-131319:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-131319",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1fb11a7bfe4b2979a8761a46be0744b41bd509cddd1292ad10a3821118463645-init/diff:/var/lib/docker/overlay2/6c8a90774455b3f13d96b15ce5fd57cf56a284df68ee1777efc5fdfa6d28e51f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1fb11a7bfe4b2979a8761a46be0744b41bd509cddd1292ad10a3821118463645/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1fb11a7bfe4b2979a8761a46be0744b41bd509cddd1292ad10a3821118463645/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1fb11a7bfe4b2979a8761a46be0744b41bd509cddd1292ad10a3821118463645/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-131319",
	                "Source": "/var/lib/docker/volumes/addons-131319/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-131319",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-131319",
	                "name.minikube.sigs.k8s.io": "addons-131319",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "989c193c417b2dd2fe55a2230307898f0f922e1d1a5935edef6f4ef83d1088cc",
	            "SandboxKey": "/var/run/docker/netns/989c193c417b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34624"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34625"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34628"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34626"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34627"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-131319": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ffd462664e1eee62320fbca4a2a28ac15b273f6f2593a4cfe3f759f824385405",
	                    "EndpointID": "2d9a102b0c7e7806c3f52e88a63e80557b2bdcaa52d73627f6161ca89352e399",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-131319",
	                        "10f831698d93"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
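
Two HostConfig fields in the inspect output above quantify the CPU pressure: "NanoCpus": 2000000000 is 2 CPUs (Docker counts 10^9 nano-CPUs per CPU), and "Memory": 4194304000 bytes is 4000 MiB, matching the --memory=4000 start flag in the Audit table below and CPUs:2 in the cluster config. These caps can be read back directly; a minimal sketch against this run's container name:

    docker inspect -f '{{.HostConfig.NanoCpus}} nano-CPUs, {{.HostConfig.Memory}} bytes' addons-131319
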
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-131319 -n addons-131319
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-131319 logs -n 25: (1.567083598s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-815538   | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |                     |
	|         | -p download-only-815538              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| delete  | -p download-only-815538              | download-only-815538   | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| start   | -o=json --download-only              | download-only-512994   | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |                     |
	|         | -p download-only-512994              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| delete  | -p download-only-512994              | download-only-512994   | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| delete  | -p download-only-815538              | download-only-815538   | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| delete  | -p download-only-512994              | download-only-512994   | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| start   | --download-only -p                   | download-docker-399396 | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |                     |
	|         | download-docker-399396               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-399396            | download-docker-399396 | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| start   | --download-only -p                   | binary-mirror-987653   | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |                     |
	|         | binary-mirror-987653                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34241               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-987653              | binary-mirror-987653   | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| addons  | disable dashboard -p                 | addons-131319          | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |                     |
	|         | addons-131319                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-131319          | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |                     |
	|         | addons-131319                        |                        |         |         |                     |                     |
	| start   | -p addons-131319 --wait=true         | addons-131319          | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:25 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:21:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:21:53.062093 1460604 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:21:53.062312 1460604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:21:53.062323 1460604 out.go:358] Setting ErrFile to fd 2...
	I0914 00:21:53.062329 1460604 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:21:53.062595 1460604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 00:21:53.063079 1460604 out.go:352] Setting JSON to false
	I0914 00:21:53.064095 1460604 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29060,"bootTime":1726244253,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 00:21:53.064177 1460604 start.go:139] virtualization:  
	I0914 00:21:53.067303 1460604 out.go:177] * [addons-131319] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 00:21:53.070583 1460604 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:21:53.070668 1460604 notify.go:220] Checking for updates...
	I0914 00:21:53.075897 1460604 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:21:53.078345 1460604 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 00:21:53.081158 1460604 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	I0914 00:21:53.083799 1460604 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 00:21:53.086293 1460604 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:21:53.089090 1460604 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:21:53.118294 1460604 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:21:53.118425 1460604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:21:53.176313 1460604 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:21:53.165529481 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:21:53.176443 1460604 docker.go:318] overlay module found
	I0914 00:21:53.179337 1460604 out.go:177] * Using the docker driver based on user configuration
	I0914 00:21:53.182028 1460604 start.go:297] selected driver: docker
	I0914 00:21:53.182051 1460604 start.go:901] validating driver "docker" against <nil>
	I0914 00:21:53.182067 1460604 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:21:53.182702 1460604 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:21:53.234980 1460604 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:21:53.225488846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:21:53.235202 1460604 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:21:53.235438 1460604 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 00:21:53.238178 1460604 out.go:177] * Using Docker driver with root privileges
	I0914 00:21:53.240668 1460604 cni.go:84] Creating CNI manager for ""
	I0914 00:21:53.240736 1460604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 00:21:53.240750 1460604 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 00:21:53.240853 1460604 start.go:340] cluster config:
	{Name:addons-131319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-131319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:21:53.244034 1460604 out.go:177] * Starting "addons-131319" primary control-plane node in "addons-131319" cluster
	I0914 00:21:53.246562 1460604 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 00:21:53.249416 1460604 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 00:21:53.251991 1460604 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 00:21:53.252052 1460604 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0914 00:21:53.252066 1460604 cache.go:56] Caching tarball of preloaded images
	I0914 00:21:53.252079 1460604 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 00:21:53.252150 1460604 preload.go:172] Found /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 00:21:53.252160 1460604 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0914 00:21:53.252525 1460604 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/config.json ...
	I0914 00:21:53.252558 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/config.json: {Name:mke531f6e527668a345c20dba6e535e253b6735e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:21:53.266857 1460604 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:21:53.266970 1460604 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 00:21:53.266990 1460604 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 00:21:53.266995 1460604 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 00:21:53.267003 1460604 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 00:21:53.267008 1460604 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 00:22:10.854550 1460604 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 00:22:10.854593 1460604 cache.go:194] Successfully downloaded all kic artifacts
	I0914 00:22:10.854624 1460604 start.go:360] acquireMachinesLock for addons-131319: {Name:mk6279d16588c6b6e55d46ee8bb80a3f52ae4419 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 00:22:10.854758 1460604 start.go:364] duration metric: took 115.889µs to acquireMachinesLock for "addons-131319"
	I0914 00:22:10.854787 1460604 start.go:93] Provisioning new machine with config: &{Name:addons-131319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-131319 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 00:22:10.854869 1460604 start.go:125] createHost starting for "" (driver="docker")
	I0914 00:22:10.857012 1460604 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0914 00:22:10.857294 1460604 start.go:159] libmachine.API.Create for "addons-131319" (driver="docker")
	I0914 00:22:10.857334 1460604 client.go:168] LocalClient.Create starting
	I0914 00:22:10.857461 1460604 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem
	I0914 00:22:11.027287 1460604 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem
	I0914 00:22:11.344061 1460604 cli_runner.go:164] Run: docker network inspect addons-131319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 00:22:11.359312 1460604 cli_runner.go:211] docker network inspect addons-131319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 00:22:11.359406 1460604 network_create.go:284] running [docker network inspect addons-131319] to gather additional debugging logs...
	I0914 00:22:11.359429 1460604 cli_runner.go:164] Run: docker network inspect addons-131319
	W0914 00:22:11.375634 1460604 cli_runner.go:211] docker network inspect addons-131319 returned with exit code 1
	I0914 00:22:11.375669 1460604 network_create.go:287] error running [docker network inspect addons-131319]: docker network inspect addons-131319: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-131319 not found
	I0914 00:22:11.375682 1460604 network_create.go:289] output of [docker network inspect addons-131319]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-131319 not found
	
	** /stderr **
	I0914 00:22:11.375781 1460604 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 00:22:11.391644 1460604 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a41110}
	I0914 00:22:11.391687 1460604 network_create.go:124] attempt to create docker network addons-131319 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 00:22:11.391750 1460604 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-131319 addons-131319
	I0914 00:22:11.464078 1460604 network_create.go:108] docker network addons-131319 192.168.49.0/24 created
	I0914 00:22:11.464111 1460604 kic.go:121] calculated static IP "192.168.49.2" for the "addons-131319" container
	I0914 00:22:11.464196 1460604 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 00:22:11.478405 1460604 cli_runner.go:164] Run: docker volume create addons-131319 --label name.minikube.sigs.k8s.io=addons-131319 --label created_by.minikube.sigs.k8s.io=true
	I0914 00:22:11.495789 1460604 oci.go:103] Successfully created a docker volume addons-131319
	I0914 00:22:11.495930 1460604 cli_runner.go:164] Run: docker run --rm --name addons-131319-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-131319 --entrypoint /usr/bin/test -v addons-131319:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib
	I0914 00:22:13.137014 1460604 cli_runner.go:217] Completed: docker run --rm --name addons-131319-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-131319 --entrypoint /usr/bin/test -v addons-131319:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib: (1.641040343s)
	I0914 00:22:13.137043 1460604 oci.go:107] Successfully prepared a docker volume addons-131319
	I0914 00:22:13.137087 1460604 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 00:22:13.137116 1460604 kic.go:194] Starting extracting preloaded images to volume ...
	I0914 00:22:13.137182 1460604 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-131319:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 00:22:17.143396 1460604 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-131319:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir: (4.00617208s)
	I0914 00:22:17.143430 1460604 kic.go:203] duration metric: took 4.006312214s to extract preloaded images to volume ...
	W0914 00:22:17.143567 1460604 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 00:22:17.143936 1460604 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 00:22:17.206795 1460604 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-131319 --name addons-131319 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-131319 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-131319 --network addons-131319 --ip 192.168.49.2 --volume addons-131319:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243
	I0914 00:22:17.533924 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Running}}
	I0914 00:22:17.566131 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:17.586742 1460604 cli_runner.go:164] Run: docker exec addons-131319 stat /var/lib/dpkg/alternatives/iptables
	I0914 00:22:17.659696 1460604 oci.go:144] the created container "addons-131319" has a running status.
	I0914 00:22:17.659726 1460604 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa...
	I0914 00:22:18.066192 1460604 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 00:22:18.103667 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:18.128900 1460604 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 00:22:18.128927 1460604 kic_runner.go:114] Args: [docker exec --privileged addons-131319 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 00:22:18.193520 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:18.222825 1460604 machine.go:93] provisionDockerMachine start ...
	I0914 00:22:18.222920 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:18.250412 1460604 main.go:141] libmachine: Using SSH client type: native
	I0914 00:22:18.250674 1460604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34624 <nil> <nil>}
	I0914 00:22:18.250684 1460604 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 00:22:18.424254 1460604 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-131319
	
	I0914 00:22:18.424345 1460604 ubuntu.go:169] provisioning hostname "addons-131319"
	I0914 00:22:18.424449 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:18.446563 1460604 main.go:141] libmachine: Using SSH client type: native
	I0914 00:22:18.446811 1460604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34624 <nil> <nil>}
	I0914 00:22:18.446823 1460604 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-131319 && echo "addons-131319" | sudo tee /etc/hostname
	I0914 00:22:18.589610 1460604 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-131319
	
	I0914 00:22:18.589775 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:18.611737 1460604 main.go:141] libmachine: Using SSH client type: native
	I0914 00:22:18.612027 1460604 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34624 <nil> <nil>}
	I0914 00:22:18.612045 1460604 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-131319' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-131319/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-131319' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 00:22:18.743696 1460604 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 00:22:18.743725 1460604 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-1454467/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-1454467/.minikube}
	I0914 00:22:18.743798 1460604 ubuntu.go:177] setting up certificates
	I0914 00:22:18.743808 1460604 provision.go:84] configureAuth start
	I0914 00:22:18.743899 1460604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-131319
	I0914 00:22:18.759800 1460604 provision.go:143] copyHostCerts
	I0914 00:22:18.759927 1460604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.pem (1078 bytes)
	I0914 00:22:18.760061 1460604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/cert.pem (1123 bytes)
	I0914 00:22:18.760120 1460604 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/key.pem (1679 bytes)
	I0914 00:22:18.760174 1460604 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem org=jenkins.addons-131319 san=[127.0.0.1 192.168.49.2 addons-131319 localhost minikube]
	I0914 00:22:18.977784 1460604 provision.go:177] copyRemoteCerts
	I0914 00:22:18.977850 1460604 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 00:22:18.977895 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:18.994065 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:19.084822 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 00:22:19.108928 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 00:22:19.132564 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 00:22:19.156104 1460604 provision.go:87] duration metric: took 412.281621ms to configureAuth
	I0914 00:22:19.156176 1460604 ubuntu.go:193] setting minikube options for container-runtime
	I0914 00:22:19.156396 1460604 config.go:182] Loaded profile config "addons-131319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:22:19.156408 1460604 machine.go:96] duration metric: took 933.561347ms to provisionDockerMachine
	I0914 00:22:19.156416 1460604 client.go:171] duration metric: took 8.299072625s to LocalClient.Create
	I0914 00:22:19.156436 1460604 start.go:167] duration metric: took 8.299143788s to libmachine.API.Create "addons-131319"
	I0914 00:22:19.156444 1460604 start.go:293] postStartSetup for "addons-131319" (driver="docker")
	I0914 00:22:19.156457 1460604 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 00:22:19.156509 1460604 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 00:22:19.156556 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:19.172647 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:19.260739 1460604 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 00:22:19.263765 1460604 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 00:22:19.263804 1460604 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 00:22:19.263819 1460604 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 00:22:19.263826 1460604 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 00:22:19.263836 1460604 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-1454467/.minikube/addons for local assets ...
	I0914 00:22:19.263922 1460604 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-1454467/.minikube/files for local assets ...
	I0914 00:22:19.263952 1460604 start.go:296] duration metric: took 107.498974ms for postStartSetup
	I0914 00:22:19.264268 1460604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-131319
	I0914 00:22:19.280099 1460604 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/config.json ...
	I0914 00:22:19.280415 1460604 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:22:19.280466 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:19.296323 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:19.380438 1460604 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 00:22:19.384900 1460604 start.go:128] duration metric: took 8.530014469s to createHost
	I0914 00:22:19.384927 1460604 start.go:83] releasing machines lock for "addons-131319", held for 8.530159068s
	I0914 00:22:19.385018 1460604 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-131319
	I0914 00:22:19.400877 1460604 ssh_runner.go:195] Run: cat /version.json
	I0914 00:22:19.400931 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:19.400992 1460604 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 00:22:19.401052 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:19.420362 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:19.437711 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:19.635814 1460604 ssh_runner.go:195] Run: systemctl --version
	I0914 00:22:19.640029 1460604 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 00:22:19.644008 1460604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 00:22:19.668356 1460604 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
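The find/sed one-liner above injects a "name" field into any loopback CNI config that lacks one and pins cniVersion to 1.0.0. Reconstructed from the sed expressions (not a dump from the node), the patched conf would look roughly like:
	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}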
	I0914 00:22:19.668433 1460604 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 00:22:19.697412 1460604 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
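Disabling here is just a rename (mv {} {}.mk_disabled), so the bridge configs remain on disk and can be restored by renaming back, e.g. (hypothetical manual step):
	sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
	    /etc/cni/net.d/87-podman-bridge.conflist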
	I0914 00:22:19.697438 1460604 start.go:495] detecting cgroup driver to use...
	I0914 00:22:19.697496 1460604 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 00:22:19.697573 1460604 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 00:22:19.709890 1460604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 00:22:19.721854 1460604 docker.go:217] disabling cri-docker service (if available) ...
	I0914 00:22:19.721942 1460604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 00:22:19.736186 1460604 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 00:22:19.750924 1460604 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 00:22:19.843133 1460604 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 00:22:19.945682 1460604 docker.go:233] disabling docker service ...
	I0914 00:22:19.945796 1460604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 00:22:19.965124 1460604 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 00:22:19.978157 1460604 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 00:22:20.077581 1460604 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 00:22:20.187802 1460604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 00:22:20.201464 1460604 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 00:22:20.218188 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0914 00:22:20.229334 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 00:22:20.240301 1460604 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 00:22:20.240419 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 00:22:20.251542 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 00:22:20.262817 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 00:22:20.276047 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 00:22:20.286221 1460604 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 00:22:20.295306 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 00:22:20.305669 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 00:22:20.315736 1460604 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 00:22:20.326341 1460604 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 00:22:20.334765 1460604 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 00:22:20.343270 1460604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:22:20.444282 1460604 ssh_runner.go:195] Run: sudo systemctl restart containerd
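The sed edits above force the cgroupfs driver (SystemdCgroup = false); after the restart this can be spot-checked on the node with stock containerd tooling (a sketch):
	sudo containerd config dump | grep SystemdCgroup
	# expected: SystemdCgroup = false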
	I0914 00:22:20.586126 1460604 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 00:22:20.586267 1460604 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 00:22:20.589892 1460604 start.go:563] Will wait 60s for crictl version
	I0914 00:22:20.590001 1460604 ssh_runner.go:195] Run: which crictl
	I0914 00:22:20.593347 1460604 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 00:22:20.630712 1460604 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0914 00:22:20.630843 1460604 ssh_runner.go:195] Run: containerd --version
	I0914 00:22:20.652948 1460604 ssh_runner.go:195] Run: containerd --version
	I0914 00:22:20.678454 1460604 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0914 00:22:20.680216 1460604 cli_runner.go:164] Run: docker network inspect addons-131319 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 00:22:20.696172 1460604 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 00:22:20.699827 1460604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
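The bash one-liner rewrites /etc/hosts via a temp file (/tmp/h.$$), dropping any stale entry and appending exactly one pinned line:
	192.168.49.1	host.minikube.internal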
	I0914 00:22:20.710345 1460604 kubeadm.go:883] updating cluster {Name:addons-131319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-131319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 00:22:20.710483 1460604 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 00:22:20.710548 1460604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:22:20.746612 1460604 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 00:22:20.746637 1460604 containerd.go:534] Images already preloaded, skipping extraction
	I0914 00:22:20.746699 1460604 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 00:22:20.782309 1460604 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 00:22:20.782336 1460604 cache_images.go:84] Images are preloaded, skipping loading
	I0914 00:22:20.782345 1460604 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0914 00:22:20.782437 1460604 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-131319 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-131319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
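Judging by the scp step below (10-kubeadm.conf, 317 bytes), this unit fragment lands in the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in; the merged unit can be inspected with standard systemd tooling:
	systemctl cat kubelet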
	I0914 00:22:20.782509 1460604 ssh_runner.go:195] Run: sudo crictl info
	I0914 00:22:20.824794 1460604 cni.go:84] Creating CNI manager for ""
	I0914 00:22:20.824824 1460604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 00:22:20.824833 1460604 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 00:22:20.824888 1460604 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-131319 NodeName:addons-131319 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 00:22:20.825060 1460604 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-131319"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
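Once rendered to /var/tmp/minikube/kubeadm.yaml (see the copy step below), a config like this can be sanity-checked with kubeadm's own validator; a sketch using the pinned binary path from this run (kubeadm config validate is available in recent releases, including v1.31):
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml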
	
	I0914 00:22:20.825137 1460604 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 00:22:20.834062 1460604 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 00:22:20.834191 1460604 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 00:22:20.843135 1460604 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 00:22:20.861157 1460604 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 00:22:20.879376 1460604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0914 00:22:20.899413 1460604 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 00:22:20.902807 1460604 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 00:22:20.913847 1460604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:22:21.025832 1460604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:22:21.042783 1460604 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319 for IP: 192.168.49.2
	I0914 00:22:21.042863 1460604 certs.go:194] generating shared ca certs ...
	I0914 00:22:21.042897 1460604 certs.go:226] acquiring lock for ca certs: {Name:mkfaf13a8785cc44d16a85b8163136271bcd698b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:21.043066 1460604 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.key
	I0914 00:22:21.391542 1460604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt ...
	I0914 00:22:21.391578 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt: {Name:mk957f12523c408a8851b66851dde65727215012 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:21.392170 1460604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.key ...
	I0914 00:22:21.392188 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.key: {Name:mk4a28a0900393538b29cf5bb418999da9093075 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:21.392586 1460604 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.key
	I0914 00:22:21.676981 1460604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.crt ...
	I0914 00:22:21.677015 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.crt: {Name:mkb2e798a60c086807a68a877c580ab6a2205bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:21.677198 1460604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.key ...
	I0914 00:22:21.677211 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.key: {Name:mk9784f09679493cc1baa9a88e7cb27f3d97181e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:21.677792 1460604 certs.go:256] generating profile certs ...
	I0914 00:22:21.677862 1460604 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.key
	I0914 00:22:21.677880 1460604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt with IP's: []
	I0914 00:22:21.922467 1460604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt ...
	I0914 00:22:21.922504 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: {Name:mk58d6cddde5f8a0d8bb8794487e9cdfe3c0de70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:21.923120 1460604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.key ...
	I0914 00:22:21.923138 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.key: {Name:mkbe15adfdf6ec8b5e74cb9e621b1819b31d3cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:21.923634 1460604 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.key.02ec250f
	I0914 00:22:21.923663 1460604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.crt.02ec250f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0914 00:22:22.241139 1460604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.crt.02ec250f ...
	I0914 00:22:22.241179 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.crt.02ec250f: {Name:mk5280abf928c1711235e18ff5acb9a296dbae86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:22.241370 1460604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.key.02ec250f ...
	I0914 00:22:22.241385 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.key.02ec250f: {Name:mk53e748b85d70041789a427299b4e3149e1ed68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:22.241852 1460604 certs.go:381] copying /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.crt.02ec250f -> /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.crt
	I0914 00:22:22.241943 1460604 certs.go:385] copying /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.key.02ec250f -> /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.key
	I0914 00:22:22.242001 1460604 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/proxy-client.key
	I0914 00:22:22.242022 1460604 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/proxy-client.crt with IP's: []
	I0914 00:22:22.946430 1460604 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/proxy-client.crt ...
	I0914 00:22:22.946469 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/proxy-client.crt: {Name:mkaeb7e58802c3d1b3dec346b3e41e40558e4268 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:22.947262 1460604 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/proxy-client.key ...
	I0914 00:22:22.947295 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/proxy-client.key: {Name:mk53aa1d7f2e2bb32f498891a97e8ef0f60e426a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:22.947523 1460604 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 00:22:22.947568 1460604 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem (1078 bytes)
	I0914 00:22:22.947601 1460604 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem (1123 bytes)
	I0914 00:22:22.947634 1460604 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem (1679 bytes)
	I0914 00:22:22.948260 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 00:22:22.978665 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 00:22:23.006980 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 00:22:23.039889 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 00:22:23.065285 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 00:22:23.089917 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 00:22:23.113693 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 00:22:23.137915 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0914 00:22:23.162196 1460604 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 00:22:23.189664 1460604 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 00:22:23.207947 1460604 ssh_runner.go:195] Run: openssl version
	I0914 00:22:23.213635 1460604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 00:22:23.223213 1460604 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:22:23.226824 1460604 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:22 /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:22:23.226901 1460604 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 00:22:23.234000 1460604 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
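The symlink name b5213941.0 is the OpenSSL subject hash of minikubeCA.pem, i.e. the value printed by the x509 -hash call two lines up (hash value inferred from the symlink name):
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941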
	I0914 00:22:23.243695 1460604 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 00:22:23.247078 1460604 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 00:22:23.247142 1460604 kubeadm.go:392] StartCluster: {Name:addons-131319 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-131319 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:22:23.247234 1460604 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 00:22:23.247356 1460604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 00:22:23.293164 1460604 cri.go:89] found id: ""
	I0914 00:22:23.293241 1460604 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 00:22:23.303780 1460604 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 00:22:23.313157 1460604 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0914 00:22:23.313224 1460604 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 00:22:23.323547 1460604 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 00:22:23.323565 1460604 kubeadm.go:157] found existing configuration files:
	
	I0914 00:22:23.323620 1460604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 00:22:23.332950 1460604 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 00:22:23.333017 1460604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 00:22:23.342745 1460604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 00:22:23.351739 1460604 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 00:22:23.351828 1460604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 00:22:23.360655 1460604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 00:22:23.369456 1460604 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 00:22:23.369517 1460604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 00:22:23.377867 1460604 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 00:22:23.386384 1460604 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 00:22:23.386449 1460604 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 00:22:23.395699 1460604 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 00:22:23.436878 1460604 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 00:22:23.436940 1460604 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 00:22:23.454653 1460604 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0914 00:22:23.454733 1460604 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0914 00:22:23.454774 1460604 kubeadm.go:310] OS: Linux
	I0914 00:22:23.454828 1460604 kubeadm.go:310] CGROUPS_CPU: enabled
	I0914 00:22:23.454885 1460604 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0914 00:22:23.454936 1460604 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0914 00:22:23.454988 1460604 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0914 00:22:23.455040 1460604 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0914 00:22:23.455091 1460604 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0914 00:22:23.455141 1460604 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0914 00:22:23.455192 1460604 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0914 00:22:23.455242 1460604 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0914 00:22:23.521892 1460604 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 00:22:23.522010 1460604 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 00:22:23.522107 1460604 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 00:22:23.536198 1460604 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 00:22:23.539909 1460604 out.go:235]   - Generating certificates and keys ...
	I0914 00:22:23.540137 1460604 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 00:22:23.540250 1460604 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 00:22:24.127547 1460604 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 00:22:24.304999 1460604 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 00:22:24.562360 1460604 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 00:22:25.841722 1460604 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 00:22:26.824143 1460604 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 00:22:26.824297 1460604 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-131319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 00:22:28.670345 1460604 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 00:22:28.670684 1460604 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-131319 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 00:22:28.895643 1460604 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 00:22:29.553790 1460604 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 00:22:30.087798 1460604 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 00:22:30.088406 1460604 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 00:22:30.338888 1460604 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 00:22:31.249662 1460604 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 00:22:32.237570 1460604 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 00:22:32.577559 1460604 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 00:22:32.870601 1460604 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 00:22:32.871435 1460604 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 00:22:32.874653 1460604 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 00:22:32.877184 1460604 out.go:235]   - Booting up control plane ...
	I0914 00:22:32.877302 1460604 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 00:22:32.877401 1460604 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 00:22:32.878721 1460604 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 00:22:32.890448 1460604 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 00:22:32.896818 1460604 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 00:22:32.899753 1460604 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 00:22:33.010900 1460604 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 00:22:33.011025 1460604 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 00:22:34.511550 1460604 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500785069s
	I0914 00:22:34.511639 1460604 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 00:22:40.514065 1460604 kubeadm.go:310] [api-check] The API server is healthy after 6.002476886s
	I0914 00:22:40.533313 1460604 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 00:22:40.546934 1460604 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 00:22:40.576634 1460604 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 00:22:40.576833 1460604 kubeadm.go:310] [mark-control-plane] Marking the node addons-131319 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 00:22:40.596835 1460604 kubeadm.go:310] [bootstrap-token] Using token: undrax.q2k685wtq571urxh
	I0914 00:22:40.598763 1460604 out.go:235]   - Configuring RBAC rules ...
	I0914 00:22:40.598891 1460604 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 00:22:40.609198 1460604 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 00:22:40.619027 1460604 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 00:22:40.624048 1460604 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 00:22:40.631107 1460604 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 00:22:40.635048 1460604 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 00:22:40.921149 1460604 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 00:22:41.346479 1460604 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 00:22:41.920828 1460604 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 00:22:41.921975 1460604 kubeadm.go:310] 
	I0914 00:22:41.922061 1460604 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 00:22:41.922073 1460604 kubeadm.go:310] 
	I0914 00:22:41.922152 1460604 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 00:22:41.922162 1460604 kubeadm.go:310] 
	I0914 00:22:41.922188 1460604 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 00:22:41.922251 1460604 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 00:22:41.922307 1460604 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 00:22:41.922314 1460604 kubeadm.go:310] 
	I0914 00:22:41.922370 1460604 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 00:22:41.922377 1460604 kubeadm.go:310] 
	I0914 00:22:41.922426 1460604 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 00:22:41.922434 1460604 kubeadm.go:310] 
	I0914 00:22:41.922486 1460604 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 00:22:41.922566 1460604 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 00:22:41.922640 1460604 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 00:22:41.922648 1460604 kubeadm.go:310] 
	I0914 00:22:41.922733 1460604 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 00:22:41.922815 1460604 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 00:22:41.922824 1460604 kubeadm.go:310] 
	I0914 00:22:41.922909 1460604 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token undrax.q2k685wtq571urxh \
	I0914 00:22:41.923016 1460604 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:293204f33a45133f182fdacfd2c933cd65b2ce5eca026bc573a790bbd3fda2af \
	I0914 00:22:41.923041 1460604 kubeadm.go:310] 	--control-plane 
	I0914 00:22:41.923049 1460604 kubeadm.go:310] 
	I0914 00:22:41.923135 1460604 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 00:22:41.923141 1460604 kubeadm.go:310] 
	I0914 00:22:41.923224 1460604 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token undrax.q2k685wtq571urxh \
	I0914 00:22:41.923336 1460604 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:293204f33a45133f182fdacfd2c933cd65b2ce5eca026bc573a790bbd3fda2af 
	I0914 00:22:41.927785 1460604 kubeadm.go:310] W0914 00:22:23.432674    1020 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:22:41.928145 1460604 kubeadm.go:310] W0914 00:22:23.434349    1020 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 00:22:41.928416 1460604 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0914 00:22:41.928562 1460604 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
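The two v1beta3 deprecation warnings come from kubeadm v1.31 still accepting, but no longer preferring, that API version; the migration it suggests would look like this (the --new-config path is illustrative):
	sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
	    kubeadm config migrate --old-config /var/tmp/minikube/kubeadm.yaml \
	    --new-config /var/tmp/minikube/kubeadm-v1beta4.yaml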
	I0914 00:22:41.928589 1460604 cni.go:84] Creating CNI manager for ""
	I0914 00:22:41.928602 1460604 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 00:22:41.931536 1460604 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 00:22:41.933182 1460604 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 00:22:41.936936 1460604 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 00:22:41.936958 1460604 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0914 00:22:41.956904 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 00:22:42.283933 1460604 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 00:22:42.284003 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:42.284088 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-131319 minikube.k8s.io/updated_at=2024_09_14T00_22_42_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-131319 minikube.k8s.io/primary=true
	I0914 00:22:42.497100 1460604 ops.go:34] apiserver oom_adj: -16
	I0914 00:22:42.497213 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:42.997373 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:43.497275 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:43.997775 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:44.498177 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:44.998115 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:45.497362 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:45.997685 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:46.498301 1460604 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 00:22:46.603973 1460604 kubeadm.go:1113] duration metric: took 4.320027579s to wait for elevateKubeSystemPrivileges
	I0914 00:22:46.604007 1460604 kubeadm.go:394] duration metric: took 23.356888137s to StartCluster
	I0914 00:22:46.604025 1460604 settings.go:142] acquiring lock: {Name:mk71d0962f5f4196c9fea75fe9a601467858166a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:46.604154 1460604 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 00:22:46.604534 1460604 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/kubeconfig: {Name:mk9726361d7deb93fbb6dba7857cc3f0a8a02233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:22:46.604730 1460604 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 00:22:46.604878 1460604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 00:22:46.605118 1460604 config.go:182] Loaded profile config "addons-131319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:22:46.605157 1460604 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 00:22:46.605239 1460604 addons.go:69] Setting yakd=true in profile "addons-131319"
	I0914 00:22:46.605255 1460604 addons.go:234] Setting addon yakd=true in "addons-131319"
	I0914 00:22:46.605278 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.605772 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.606353 1460604 addons.go:69] Setting metrics-server=true in profile "addons-131319"
	I0914 00:22:46.606374 1460604 addons.go:234] Setting addon metrics-server=true in "addons-131319"
	I0914 00:22:46.606410 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.606418 1460604 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-131319"
	I0914 00:22:46.606435 1460604 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-131319"
	I0914 00:22:46.606457 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.606851 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.606896 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.609209 1460604 addons.go:69] Setting registry=true in profile "addons-131319"
	I0914 00:22:46.609245 1460604 addons.go:234] Setting addon registry=true in "addons-131319"
	I0914 00:22:46.609284 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.609745 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.610647 1460604 addons.go:69] Setting cloud-spanner=true in profile "addons-131319"
	I0914 00:22:46.610716 1460604 addons.go:234] Setting addon cloud-spanner=true in "addons-131319"
	I0914 00:22:46.610809 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.610951 1460604 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-131319"
	I0914 00:22:46.613084 1460604 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-131319"
	I0914 00:22:46.613125 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.613528 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.615280 1460604 addons.go:69] Setting storage-provisioner=true in profile "addons-131319"
	I0914 00:22:46.615354 1460604 addons.go:234] Setting addon storage-provisioner=true in "addons-131319"
	I0914 00:22:46.615405 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.615985 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.610961 1460604 addons.go:69] Setting default-storageclass=true in profile "addons-131319"
	I0914 00:22:46.622378 1460604 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-131319"
	I0914 00:22:46.622720 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.635497 1460604 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-131319"
	I0914 00:22:46.635593 1460604 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-131319"
	I0914 00:22:46.636143 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.610966 1460604 addons.go:69] Setting gcp-auth=true in profile "addons-131319"
	I0914 00:22:46.638358 1460604 mustload.go:65] Loading cluster: addons-131319
	I0914 00:22:46.638558 1460604 config.go:182] Loaded profile config "addons-131319": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:22:46.638809 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.610970 1460604 addons.go:69] Setting ingress=true in profile "addons-131319"
	I0914 00:22:46.667833 1460604 addons.go:234] Setting addon ingress=true in "addons-131319"
	I0914 00:22:46.667950 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.671592 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.681920 1460604 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 00:22:46.684336 1460604 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 00:22:46.684423 1460604 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 00:22:46.684523 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
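The two `docker container inspect` Go templates used throughout this phase do the driver's bookkeeping: `--format={{.State.Status}}` reads the container's lifecycle state before each addon touches it, and the `"22/tcp"` template resolves the host port Docker mapped to the container's SSH daemon. Run standalone against this profile's container, they look like:

    docker container inspect addons-131319 --format '{{.State.Status}}'
    # e.g. "running"
    docker container inspect addons-131319 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # e.g. "34624", the port every sshutil dial below connects to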
	I0914 00:22:46.610973 1460604 addons.go:69] Setting ingress-dns=true in profile "addons-131319"
	I0914 00:22:46.704865 1460604 addons.go:234] Setting addon ingress-dns=true in "addons-131319"
	I0914 00:22:46.704975 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.705525 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.610976 1460604 addons.go:69] Setting inspektor-gadget=true in profile "addons-131319"
	I0914 00:22:46.712310 1460604 addons.go:234] Setting addon inspektor-gadget=true in "addons-131319"
	I0914 00:22:46.712366 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.610985 1460604 out.go:177] * Verifying Kubernetes components...
	I0914 00:22:46.652323 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.657770 1460604 addons.go:69] Setting volcano=true in profile "addons-131319"
	I0914 00:22:46.726793 1460604 addons.go:234] Setting addon volcano=true in "addons-131319"
	I0914 00:22:46.726834 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.727321 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.732603 1460604 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 00:22:46.735501 1460604 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 00:22:46.735527 1460604 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 00:22:46.735608 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:46.657789 1460604 addons.go:69] Setting volumesnapshots=true in profile "addons-131319"
	I0914 00:22:46.746177 1460604 addons.go:234] Setting addon volumesnapshots=true in "addons-131319"
	I0914 00:22:46.746307 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.747024 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.752039 1460604 addons.go:234] Setting addon default-storageclass=true in "addons-131319"
	I0914 00:22:46.752088 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.752635 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.759555 1460604 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 00:22:46.760187 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.774453 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 00:22:46.776177 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 00:22:46.784813 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 00:22:46.787623 1460604 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 00:22:46.787828 1460604 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 00:22:46.788280 1460604 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
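This bash one-liner rewrites the live CoreDNS Corefile: it dumps the `coredns` ConfigMap, uses `sed` to splice a `hosts` block ahead of the `forward . /etc/resolv.conf` directive (and a `log` directive ahead of `errors`), then pushes the result back with `kubectl replace`. Reconstructed from the sed expression, the injected fragment is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

which is what lets cluster pods resolve host.minikube.internal to the Docker network gateway; the "host record injected" line further down confirms it took effect.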
	I0914 00:22:46.821226 1460604 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 00:22:46.824220 1460604 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:22:46.824755 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 00:22:46.824865 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:46.834374 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 00:22:46.868327 1460604 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 00:22:46.868394 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 00:22:46.868479 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:46.873723 1460604 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 00:22:46.875216 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 00:22:46.882460 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 00:22:46.885147 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 00:22:46.878995 1460604 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 00:22:46.885908 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 00:22:46.885987 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:46.891512 1460604 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 00:22:46.893845 1460604 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:22:46.896352 1460604 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:22:46.900230 1460604 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 00:22:46.900288 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 00:22:46.900389 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:46.916204 1460604 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0914 00:22:46.918014 1460604 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 00:22:46.918038 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 00:22:46.918106 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:46.924444 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:46.879982 1460604 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-131319"
	I0914 00:22:46.926776 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.927248 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:46.928248 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:46.936687 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 00:22:46.939280 1460604 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 00:22:46.939460 1460604 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 00:22:46.939491 1460604 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 00:22:46.939562 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:46.943068 1460604 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 00:22:46.943091 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 00:22:46.943161 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:46.957008 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:46.970563 1460604 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0914 00:22:46.976235 1460604 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0914 00:22:46.983482 1460604 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0914 00:22:46.991746 1460604 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 00:22:47.018420 1460604 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 00:22:47.018505 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0914 00:22:47.018624 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:47.027339 1460604 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 00:22:47.027410 1460604 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 00:22:47.027513 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:47.051515 1460604 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 00:22:47.051543 1460604 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 00:22:47.051623 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:47.082635 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.105990 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.108784 1460604 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 00:22:47.113577 1460604 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 00:22:47.116034 1460604 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 00:22:47.116069 1460604 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 00:22:47.116146 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:47.126154 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.143526 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.155588 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.180280 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.192741 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.212057 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.217771 1460604 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 00:22:47.220100 1460604 out.go:177]   - Using image docker.io/busybox:stable
	I0914 00:22:47.221482 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.223054 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.231529 1460604 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 00:22:47.231554 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 00:22:47.231619 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:47.243125 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:47.262207 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	W0914 00:22:47.263587 1460604 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0914 00:22:47.263613 1460604 retry.go:31] will retry after 129.332635ms: ssh: handshake failed: EOF
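The handshake EOF above is a transient failure, not a misconfiguration: a dozen goroutines are dialing the same mapped SSH port at once, so an occasional dropped connection is expected and minikube simply retries after a short backoff (~129 ms here). For manual debugging, the equivalent connection, with key path and port taken from the sshutil lines above, would be:

    ssh -p 34624 \
      -i /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa \
      docker@127.0.0.1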
	I0914 00:22:47.737420 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 00:22:47.766714 1460604 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 00:22:47.766738 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 00:22:47.831080 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 00:22:47.857168 1460604 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 00:22:47.857241 1460604 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 00:22:47.866827 1460604 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 00:22:47.866894 1460604 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 00:22:47.896881 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 00:22:47.901685 1460604 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 00:22:47.901712 1460604 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 00:22:47.927595 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 00:22:47.933644 1460604 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 00:22:47.933671 1460604 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 00:22:48.011533 1460604 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 00:22:48.011567 1460604 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 00:22:48.031892 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 00:22:48.080425 1460604 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 00:22:48.080466 1460604 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 00:22:48.084435 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 00:22:48.107782 1460604 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 00:22:48.107819 1460604 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 00:22:48.154714 1460604 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 00:22:48.154748 1460604 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 00:22:48.171233 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 00:22:48.260920 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 00:22:48.287072 1460604 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 00:22:48.287116 1460604 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 00:22:48.302205 1460604 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 00:22:48.302225 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 00:22:48.313691 1460604 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 00:22:48.313735 1460604 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 00:22:48.319198 1460604 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 00:22:48.319225 1460604 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 00:22:48.342843 1460604 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 00:22:48.342877 1460604 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 00:22:48.450666 1460604 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 00:22:48.450703 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 00:22:48.514625 1460604 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 00:22:48.514658 1460604 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 00:22:48.526540 1460604 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 00:22:48.526578 1460604 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 00:22:48.590759 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 00:22:48.603866 1460604 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 00:22:48.603890 1460604 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 00:22:48.607459 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 00:22:48.644255 1460604 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.535440465s)
	I0914 00:22:48.644390 1460604 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.856089371s)
	I0914 00:22:48.644441 1460604 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0914 00:22:48.645264 1460604 node_ready.go:35] waiting up to 6m0s for node "addons-131319" to be "Ready" ...
	I0914 00:22:48.649750 1460604 node_ready.go:49] node "addons-131319" has status "Ready":"True"
	I0914 00:22:48.649778 1460604 node_ready.go:38] duration metric: took 4.485113ms for node "addons-131319" to be "Ready" ...
	I0914 00:22:48.649789 1460604 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:22:48.667631 1460604 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace to be "Ready" ...
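pod_ready.go polls the pod's Ready condition through the client-go API rather than shelling out; a hypothetical kubectl equivalent of the same wait would be:

    kubectl --context addons-131319 -n kube-system \
      wait --for=condition=Ready pod/coredns-7c65d6cfc9-dc7c8 --timeout=6m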
	I0914 00:22:48.772961 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 00:22:48.775986 1460604 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 00:22:48.776057 1460604 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 00:22:48.780664 1460604 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 00:22:48.780734 1460604 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 00:22:48.944987 1460604 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:22:48.945052 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 00:22:49.087841 1460604 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 00:22:49.087939 1460604 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 00:22:49.148592 1460604 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-131319" context rescaled to 1 replicas
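The rescale above trims CoreDNS from its default two replicas to one, which is plenty for a single-node cluster; the effect matches this sketch (not the actual kapi.go code path):

    kubectl --context addons-131319 -n kube-system scale deployment coredns --replicas=1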
	I0914 00:22:49.152848 1460604 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 00:22:49.152919 1460604 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 00:22:49.277624 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:22:49.377624 1460604 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 00:22:49.377701 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 00:22:49.498729 1460604 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 00:22:49.498804 1460604 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 00:22:49.710253 1460604 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 00:22:49.710320 1460604 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 00:22:49.801053 1460604 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 00:22:49.801079 1460604 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 00:22:49.886309 1460604 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 00:22:49.886332 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 00:22:49.956717 1460604 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 00:22:49.956740 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 00:22:50.124305 1460604 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 00:22:50.124335 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 00:22:50.299183 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 00:22:50.506114 1460604 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 00:22:50.506143 1460604 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 00:22:50.684740 1460604 pod_ready.go:103] pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace has status "Ready":"False"
	I0914 00:22:50.808385 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 00:22:53.190040 1460604 pod_ready.go:103] pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace has status "Ready":"False"
	I0914 00:22:54.168068 1460604 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 00:22:54.168232 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:54.190717 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:54.576223 1460604 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 00:22:54.602513 1460604 addons.go:234] Setting addon gcp-auth=true in "addons-131319"
	I0914 00:22:54.602573 1460604 host.go:66] Checking if "addons-131319" exists ...
	I0914 00:22:54.603078 1460604 cli_runner.go:164] Run: docker container inspect addons-131319 --format={{.State.Status}}
	I0914 00:22:54.634363 1460604 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 00:22:54.634443 1460604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-131319
	I0914 00:22:54.659140 1460604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34624 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/addons-131319/id_rsa Username:docker}
	I0914 00:22:55.728752 1460604 pod_ready.go:103] pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace has status "Ready":"False"
	I0914 00:22:56.705967 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.968456146s)
	I0914 00:22:56.706113 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.87495453s)
	I0914 00:22:56.706274 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.809365309s)
	I0914 00:22:56.706296 1460604 addons.go:475] Verifying addon ingress=true in "addons-131319"
	I0914 00:22:56.706704 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.77903707s)
	I0914 00:22:56.706804 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.674882182s)
	I0914 00:22:56.706880 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.622412899s)
	I0914 00:22:56.706979 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.53572589s)
	I0914 00:22:56.707033 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.446087932s)
	I0914 00:22:56.707404 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.116613366s)
	I0914 00:22:56.707434 1460604 addons.go:475] Verifying addon metrics-server=true in "addons-131319"
	I0914 00:22:56.707481 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.09998537s)
	I0914 00:22:56.707504 1460604 addons.go:475] Verifying addon registry=true in "addons-131319"
	I0914 00:22:56.707995 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.934985836s)
	I0914 00:22:56.708112 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.430411489s)
	W0914 00:22:56.708144 1460604 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 00:22:56.708165 1460604 retry.go:31] will retry after 284.653202ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
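The failure being retried is a CRD establishment race, not a broken manifest: the batch applies the three snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass instance together, and the API server cannot map the kind "VolumeSnapshotClass" until the freshly created CRDs are established, hence the "ensure CRDs are installed first" hint. The `kubectl apply --force` rerun below succeeds about two seconds later. Ordering the steps explicitly would avoid the retry altogether, e.g. (a sketch, not what the addon manager does):

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f csi-hostpath-snapshotclass.yaml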
	I0914 00:22:56.708231 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.409018873s)
	I0914 00:22:56.709677 1460604 out.go:177] * Verifying registry addon...
	I0914 00:22:56.709744 1460604 out.go:177] * Verifying ingress addon...
	I0914 00:22:56.711096 1460604 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-131319 service yakd-dashboard -n yakd-dashboard
	
	I0914 00:22:56.713409 1460604 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0914 00:22:56.714739 1460604 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 00:22:56.747837 1460604 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 00:22:56.747956 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:22:56.753824 1460604 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 00:22:56.753884 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0914 00:22:56.804793 1460604 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
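This warning is an optimistic-concurrency conflict: two writers raced to update the `local-path` StorageClass, and the loser's write was rejected with "the object has been modified". Marking a class default is only an annotation, so re-issuing the patch against the latest object version resolves it (hedged sketch):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'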
	I0914 00:22:56.993721 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 00:22:57.225206 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:22:57.225929 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:22:57.549457 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.741025253s)
	I0914 00:22:57.549533 1460604 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-131319"
	I0914 00:22:57.549753 1460604 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.915357869s)
	I0914 00:22:57.553163 1460604 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 00:22:57.553290 1460604 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 00:22:57.556095 1460604 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 00:22:57.558845 1460604 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 00:22:57.561535 1460604 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 00:22:57.561592 1460604 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 00:22:57.563916 1460604 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 00:22:57.563941 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:22:57.618992 1460604 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 00:22:57.619019 1460604 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 00:22:57.714935 1460604 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 00:22:57.714967 1460604 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 00:22:57.720010 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:22:57.720089 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:22:57.785127 1460604 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 00:22:58.062879 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:22:58.175503 1460604 pod_ready.go:103] pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace has status "Ready":"False"
	I0914 00:22:58.219390 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:22:58.219685 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:22:58.512420 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.518635601s)
	I0914 00:22:58.562219 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:22:58.721954 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:22:58.723274 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:22:58.816741 1460604 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.031525259s)
	I0914 00:22:58.820448 1460604 addons.go:475] Verifying addon gcp-auth=true in "addons-131319"
	I0914 00:22:58.822978 1460604 out.go:177] * Verifying gcp-auth addon...
	I0914 00:22:58.825746 1460604 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 00:22:58.828340 1460604 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 00:22:59.061449 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:22:59.217473 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:22:59.220881 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:22:59.561389 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:22:59.718173 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:22:59.720009 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:00.093266 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:00.180761 1460604 pod_ready.go:103] pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace has status "Ready":"False"
	I0914 00:23:00.259284 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:00.333619 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:00.563567 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:00.719666 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:00.722232 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:01.061336 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:01.220503 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:01.221552 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:01.562293 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:01.720664 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:01.722177 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:02.062701 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:02.219605 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:02.220827 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:02.561890 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:02.705590 1460604 pod_ready.go:103] pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace has status "Ready":"False"
	I0914 00:23:02.720174 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:02.721648 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:03.062423 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:03.217219 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:03.220247 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:03.561337 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:03.674573 1460604 pod_ready.go:93] pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace has status "Ready":"True"
	I0914 00:23:03.674646 1460604 pod_ready.go:82] duration metric: took 15.006979986s for pod "coredns-7c65d6cfc9-dc7c8" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.674685 1460604 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vhmnt" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.677077 1460604 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-vhmnt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vhmnt" not found
	I0914 00:23:03.677142 1460604 pod_ready.go:82] duration metric: took 2.429284ms for pod "coredns-7c65d6cfc9-vhmnt" in "kube-system" namespace to be "Ready" ...
	E0914 00:23:03.677178 1460604 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-vhmnt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-vhmnt" not found
	I0914 00:23:03.677211 1460604 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-131319" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.682980 1460604 pod_ready.go:93] pod "etcd-addons-131319" in "kube-system" namespace has status "Ready":"True"
	I0914 00:23:03.683016 1460604 pod_ready.go:82] duration metric: took 5.778581ms for pod "etcd-addons-131319" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.683037 1460604 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-131319" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.688987 1460604 pod_ready.go:93] pod "kube-apiserver-addons-131319" in "kube-system" namespace has status "Ready":"True"
	I0914 00:23:03.689012 1460604 pod_ready.go:82] duration metric: took 5.965157ms for pod "kube-apiserver-addons-131319" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.689025 1460604 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-131319" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.697538 1460604 pod_ready.go:93] pod "kube-controller-manager-addons-131319" in "kube-system" namespace has status "Ready":"True"
	I0914 00:23:03.697563 1460604 pod_ready.go:82] duration metric: took 8.528815ms for pod "kube-controller-manager-addons-131319" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.697576 1460604 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g7jvd" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.721154 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:03.721932 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:03.871797 1460604 pod_ready.go:93] pod "kube-proxy-g7jvd" in "kube-system" namespace has status "Ready":"True"
	I0914 00:23:03.871825 1460604 pod_ready.go:82] duration metric: took 174.239018ms for pod "kube-proxy-g7jvd" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:03.871837 1460604 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-131319" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:04.064186 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:04.218033 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:04.219784 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:04.284926 1460604 pod_ready.go:93] pod "kube-scheduler-addons-131319" in "kube-system" namespace has status "Ready":"True"
	I0914 00:23:04.285029 1460604 pod_ready.go:82] duration metric: took 413.171358ms for pod "kube-scheduler-addons-131319" in "kube-system" namespace to be "Ready" ...
	I0914 00:23:04.285057 1460604 pod_ready.go:39] duration metric: took 15.635228195s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 00:23:04.285107 1460604 api_server.go:52] waiting for apiserver process to appear ...
	I0914 00:23:04.285228 1460604 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:23:04.325233 1460604 api_server.go:72] duration metric: took 17.720469082s to wait for apiserver process to appear ...
	I0914 00:23:04.325325 1460604 api_server.go:88] waiting for apiserver healthz status ...
	I0914 00:23:04.325362 1460604 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 00:23:04.336331 1460604 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 00:23:04.337573 1460604 api_server.go:141] control plane version: v1.31.1
	I0914 00:23:04.337598 1460604 api_server.go:131] duration metric: took 12.252962ms to wait for apiserver health ...
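The healthz probe goes straight to the API server's health endpoint at https://192.168.49.2:8443/healthz and accepts the bare "ok" body. The same check through kubeconfig credentials (a sketch for manual verification):

    kubectl --context addons-131319 get --raw='/healthz'
    # ok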
	I0914 00:23:04.337608 1460604 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 00:23:04.487490 1460604 system_pods.go:59] 18 kube-system pods found
	I0914 00:23:04.487539 1460604 system_pods.go:61] "coredns-7c65d6cfc9-dc7c8" [f382f521-0e1f-4364-a700-a7a7508f2be9] Running
	I0914 00:23:04.487574 1460604 system_pods.go:61] "csi-hostpath-attacher-0" [54b9b4ab-4266-4f4a-a756-cea3af895d01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 00:23:04.487595 1460604 system_pods.go:61] "csi-hostpath-resizer-0" [9a3a0a5f-6bfd-45c6-bf8b-e2b26f464523] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 00:23:04.487605 1460604 system_pods.go:61] "csi-hostpathplugin-f7rhq" [f9dcc47e-53f5-4d04-a45c-27b884956379] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 00:23:04.487615 1460604 system_pods.go:61] "etcd-addons-131319" [0890cf79-9024-4b26-9b79-a58e59a7de3d] Running
	I0914 00:23:04.487623 1460604 system_pods.go:61] "kindnet-sshwv" [2901683e-8722-41d1-8ba0-02d7a59421e2] Running
	I0914 00:23:04.487627 1460604 system_pods.go:61] "kube-apiserver-addons-131319" [5b68c9d0-3ac0-4d14-a978-5a5a76a0e2c7] Running
	I0914 00:23:04.487650 1460604 system_pods.go:61] "kube-controller-manager-addons-131319" [a169ad4c-eeb2-4720-9b96-d559f8a82b1d] Running
	I0914 00:23:04.487660 1460604 system_pods.go:61] "kube-ingress-dns-minikube" [d5e68c79-97b7-4709-a57e-70c7998c0956] Running
	I0914 00:23:04.487664 1460604 system_pods.go:61] "kube-proxy-g7jvd" [fbb9c82c-6967-4c99-8fa7-74443e67ceb5] Running
	I0914 00:23:04.487668 1460604 system_pods.go:61] "kube-scheduler-addons-131319" [28d6802f-2792-46de-ae45-4f9349e77b69] Running
	I0914 00:23:04.487681 1460604 system_pods.go:61] "metrics-server-84c5f94fbc-ltmw6" [8091786a-c4c2-4358-bc15-288b7232a51f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 00:23:04.487689 1460604 system_pods.go:61] "nvidia-device-plugin-daemonset-88zhs" [72d14544-2ba6-426a-8bed-2ae9afb79959] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0914 00:23:04.487699 1460604 system_pods.go:61] "registry-66c9cd494c-xcfqw" [2e57b878-f2f9-4d80-a055-4ca334d60419] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 00:23:04.487706 1460604 system_pods.go:61] "registry-proxy-thrrk" [505e6aed-b3d3-4b91-aaf7-d36064f56137] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 00:23:04.487737 1460604 system_pods.go:61] "snapshot-controller-56fcc65765-fvdpn" [0d0d0432-4d26-4386-85f3-3b35a1b001b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 00:23:04.487752 1460604 system_pods.go:61] "snapshot-controller-56fcc65765-knqhg" [6696ce31-48df-461f-bef7-9b39c21ddcc1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 00:23:04.487761 1460604 system_pods.go:61] "storage-provisioner" [d8b157bc-0742-4773-a7e1-c5d81cf40dbc] Running
	I0914 00:23:04.487773 1460604 system_pods.go:74] duration metric: took 150.158921ms to wait for pod list to return data ...
	I0914 00:23:04.487782 1460604 default_sa.go:34] waiting for default service account to be created ...
	I0914 00:23:04.568859 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:04.671236 1460604 default_sa.go:45] found service account: "default"
	I0914 00:23:04.671266 1460604 default_sa.go:55] duration metric: took 183.473062ms for default service account to be created ...
	I0914 00:23:04.671277 1460604 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 00:23:04.731175 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:04.732182 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:04.881623 1460604 system_pods.go:86] 18 kube-system pods found
	I0914 00:23:04.881723 1460604 system_pods.go:89] "coredns-7c65d6cfc9-dc7c8" [f382f521-0e1f-4364-a700-a7a7508f2be9] Running
	I0914 00:23:04.881761 1460604 system_pods.go:89] "csi-hostpath-attacher-0" [54b9b4ab-4266-4f4a-a756-cea3af895d01] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 00:23:04.881790 1460604 system_pods.go:89] "csi-hostpath-resizer-0" [9a3a0a5f-6bfd-45c6-bf8b-e2b26f464523] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 00:23:04.881838 1460604 system_pods.go:89] "csi-hostpathplugin-f7rhq" [f9dcc47e-53f5-4d04-a45c-27b884956379] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 00:23:04.881870 1460604 system_pods.go:89] "etcd-addons-131319" [0890cf79-9024-4b26-9b79-a58e59a7de3d] Running
	I0914 00:23:04.881893 1460604 system_pods.go:89] "kindnet-sshwv" [2901683e-8722-41d1-8ba0-02d7a59421e2] Running
	I0914 00:23:04.881916 1460604 system_pods.go:89] "kube-apiserver-addons-131319" [5b68c9d0-3ac0-4d14-a978-5a5a76a0e2c7] Running
	I0914 00:23:04.881938 1460604 system_pods.go:89] "kube-controller-manager-addons-131319" [a169ad4c-eeb2-4720-9b96-d559f8a82b1d] Running
	I0914 00:23:04.881977 1460604 system_pods.go:89] "kube-ingress-dns-minikube" [d5e68c79-97b7-4709-a57e-70c7998c0956] Running
	I0914 00:23:04.882008 1460604 system_pods.go:89] "kube-proxy-g7jvd" [fbb9c82c-6967-4c99-8fa7-74443e67ceb5] Running
	I0914 00:23:04.882032 1460604 system_pods.go:89] "kube-scheduler-addons-131319" [28d6802f-2792-46de-ae45-4f9349e77b69] Running
	I0914 00:23:04.882056 1460604 system_pods.go:89] "metrics-server-84c5f94fbc-ltmw6" [8091786a-c4c2-4358-bc15-288b7232a51f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 00:23:04.882083 1460604 system_pods.go:89] "nvidia-device-plugin-daemonset-88zhs" [72d14544-2ba6-426a-8bed-2ae9afb79959] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0914 00:23:04.882117 1460604 system_pods.go:89] "registry-66c9cd494c-xcfqw" [2e57b878-f2f9-4d80-a055-4ca334d60419] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0914 00:23:04.882141 1460604 system_pods.go:89] "registry-proxy-thrrk" [505e6aed-b3d3-4b91-aaf7-d36064f56137] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 00:23:04.882164 1460604 system_pods.go:89] "snapshot-controller-56fcc65765-fvdpn" [0d0d0432-4d26-4386-85f3-3b35a1b001b2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 00:23:04.882188 1460604 system_pods.go:89] "snapshot-controller-56fcc65765-knqhg" [6696ce31-48df-461f-bef7-9b39c21ddcc1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 00:23:04.882218 1460604 system_pods.go:89] "storage-provisioner" [d8b157bc-0742-4773-a7e1-c5d81cf40dbc] Running
	I0914 00:23:04.882250 1460604 system_pods.go:126] duration metric: took 210.964126ms to wait for k8s-apps to be running ...
	I0914 00:23:04.882282 1460604 system_svc.go:44] waiting for kubelet service to be running ...
	I0914 00:23:04.882364 1460604 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:23:04.905257 1460604 system_svc.go:56] duration metric: took 22.965443ms WaitForService to wait for kubelet
	I0914 00:23:04.905326 1460604 kubeadm.go:582] duration metric: took 18.300565802s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
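
Both process checks in this log shell out: pgrep for the apiserver process and systemctl for the kubelet unit. The following is a hedged local sketch of the same two probes; minikube actually runs them through its SSH runner, and the command patterns here are copied from the log rather than from minikube's source.

// process_checks.go: run the two exec-based probes logged above on the local host.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `pgrep -xnf` matches the full command line of the newest exact match;
	// exit status 0 means a matching kube-apiserver process exists.
	if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err != nil {
		fmt.Println("kube-apiserver process not found:", err)
	} else {
		fmt.Println("kube-apiserver process is up")
	}
	// `systemctl is-active --quiet` exits 0 when the unit is active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
	} else {
		fmt.Println("kubelet is active")
	}
}
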
	I0914 00:23:04.905360 1460604 node_conditions.go:102] verifying NodePressure condition ...
	I0914 00:23:05.062756 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:05.072375 1460604 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 00:23:05.072410 1460604 node_conditions.go:123] node cpu capacity is 2
	I0914 00:23:05.072424 1460604 node_conditions.go:105] duration metric: took 167.040996ms to run NodePressure ...
	I0914 00:23:05.072437 1460604 start.go:241] waiting for startup goroutines ...
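
The NodePressure verification above reads capacity and pressure conditions off the Node objects. Here is a client-go sketch that prints the same fields; the kubeconfig path is an assumption, and this is not minikube's own code.

// node_conditions.go: print node capacity and non-Ready conditions.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig lives at the default path.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name,
			n.Status.Capacity.Cpu().String(),
			n.Status.Capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
			if c.Type != corev1.NodeReady {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
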
	I0914 00:23:05.220416 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:05.221721 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:05.579169 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:05.720463 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:05.721267 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:06.062138 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:06.221544 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:06.223187 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:06.594379 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:06.723041 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:06.724406 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:07.063028 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:07.218191 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:07.220211 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:07.560877 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:07.723241 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:07.724798 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:08.060717 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:08.220593 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:08.220935 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:08.562162 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:08.719953 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:08.720656 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:09.062281 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:09.219425 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:09.220958 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:09.561498 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:09.718954 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:09.720263 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:10.060902 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:10.218625 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:10.219575 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:10.561267 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:10.717306 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:10.720021 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:11.067728 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:11.218251 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:11.222118 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:11.564469 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:11.719542 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:11.720681 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:12.061828 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:12.221440 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:12.222393 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:12.564162 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:12.717885 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:12.721524 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:13.062030 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:13.221199 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:13.222476 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:13.561420 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:13.717558 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:13.719884 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:14.061349 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:14.217927 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:14.220131 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:14.561916 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:14.720662 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:14.721737 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:15.062382 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:15.219517 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:15.220558 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:15.561373 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:15.717393 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:15.720324 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:16.061842 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:16.219640 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:16.221007 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:16.561063 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:16.720127 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:16.720909 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:17.062431 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:17.222122 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:17.223158 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:17.561476 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:17.720178 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:17.722360 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:18.062145 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:18.218817 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:18.219786 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:18.561023 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:18.718895 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:18.719957 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:19.061650 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:19.221864 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:19.222571 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:19.564544 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:19.723092 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:19.724509 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:20.062078 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:20.226610 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:20.227595 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:20.567204 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:20.736784 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:20.737685 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:21.062166 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:21.232135 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:21.233283 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:21.562696 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:21.721058 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:21.722463 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:22.062316 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:22.217079 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:22.220585 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:22.561314 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:22.720806 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:22.722575 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:23.060638 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:23.223168 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:23.224798 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:23.561273 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:23.718852 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:23.719361 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:24.061900 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:24.218078 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:24.219823 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:24.561646 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:24.719702 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:24.720829 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:25.062236 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:25.218667 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:25.220374 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:25.561204 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:25.718886 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:25.719249 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:26.061932 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:26.218765 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:26.220585 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:26.561758 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:26.720330 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:26.723193 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:27.062101 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:27.218059 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:27.221713 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:27.562389 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:27.722289 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:27.723987 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:28.061532 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:28.221239 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:28.223910 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:28.561642 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:28.718515 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:28.721649 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:29.061684 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:29.219483 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:29.220765 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:29.562837 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:29.722828 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:29.727134 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:30.063553 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:30.221399 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:30.222536 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:30.561702 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:30.729433 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:30.730400 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:31.062901 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:31.221211 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:31.221819 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:31.561231 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:31.719332 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:31.720555 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:32.061841 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:32.218255 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:32.222638 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:32.562204 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:32.719836 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:32.720278 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:33.063548 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:33.219773 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:33.220562 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:33.561493 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:33.719054 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:33.719634 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 00:23:34.065830 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:34.218281 1460604 kapi.go:107] duration metric: took 37.504880689s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 00:23:34.219836 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:34.562344 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:34.719700 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:35.061911 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:35.219886 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:35.563207 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:35.719832 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:36.066304 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:36.222137 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:36.561244 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:36.719039 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:37.061023 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:37.219929 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:37.561930 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:37.720453 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:38.062318 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:38.218951 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:38.560902 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:38.719380 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:39.061709 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:39.220257 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:39.560677 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:39.719565 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:40.061820 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:40.220317 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:40.562297 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:40.721702 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:41.062039 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:41.225485 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:41.561381 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:41.719665 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:42.061749 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:42.219465 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:42.561135 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:42.719994 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:43.062113 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:43.220103 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:43.561512 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:43.720297 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:44.061748 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:44.226373 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:44.563734 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:44.722127 1460604 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 00:23:45.064971 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:45.223225 1460604 kapi.go:107] duration metric: took 48.508480327s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 00:23:45.562799 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:46.061502 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:46.569700 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:47.062300 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:47.561826 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:48.062667 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:48.561368 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:49.061446 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:49.561776 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:50.060941 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:50.560882 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:51.061210 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:51.564926 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:52.060432 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:52.560738 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 00:23:53.061653 1460604 kapi.go:107] duration metric: took 55.505559996s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
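
Every kapi.go:96 line above is one iteration of a poll over a label selector, and each kapi.go:107 line reports the total wait. Below is a hedged client-go sketch of that pattern; the selector is taken from the log, while the kubeconfig path, namespace, poll interval, and timeout are assumptions rather than minikube's kapi implementation.

// wait_for_label.go: poll pods matching a label selector until all are Running.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	sel := "kubernetes.io/minikube-addons=csi-hostpath-driver"
	for start := time.Now(); time.Since(start) < 6*time.Minute; time.Sleep(500 * time.Millisecond) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil || len(pods.Items) == 0 {
			continue
		}
		running := true
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
				running = false
			}
		}
		if running {
			fmt.Printf("took %s to wait for %s\n", time.Since(start), sel)
			return
		}
	}
	fmt.Println("timed out waiting for", sel)
}
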
	I0914 00:24:22.329485 1460604 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 00:24:22.329521 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:22.829899 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:23.329843 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:23.830462 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:24.329582 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:24.829934 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:25.329609 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:25.831204 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:26.329651 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:26.830044 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:27.330038 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:27.830953 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:28.329290 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:28.830296 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:29.330039 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:29.829539 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:30.329313 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:30.829778 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:31.328882 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:31.829799 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:32.329189 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:32.829736 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:33.329762 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:33.830312 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:34.329633 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:34.829838 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:35.329821 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:35.829960 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:36.329218 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:36.829891 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:37.329669 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:37.829842 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:38.329906 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:38.829820 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:39.329905 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:39.830870 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:40.330550 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:40.829816 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:41.329989 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:41.829988 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:42.329787 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:42.829842 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:43.329875 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:43.830617 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:44.329852 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:44.829973 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:45.330265 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:45.830596 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:46.329192 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:46.830328 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:47.329285 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:47.829873 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:48.329641 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:48.829867 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:49.330408 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:49.830446 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:50.329768 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:50.830149 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:51.329604 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:51.830276 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:52.330061 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:52.829922 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:53.329575 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:53.830806 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:54.329231 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:54.829501 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:55.330042 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:55.830183 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:56.329570 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:56.830152 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:57.329967 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:57.829883 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:58.328938 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:58.831777 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:59.329745 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:24:59.829911 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:00.367296 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:00.829141 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:01.330395 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:01.845985 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:02.330691 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:02.829931 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:03.330195 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:03.830415 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:04.329741 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:04.829606 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:05.336000 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:05.831711 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:06.329667 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:06.829422 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:07.329284 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:07.829859 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:08.329870 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:08.829523 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:09.329328 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:09.829828 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:10.329791 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:10.830448 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:11.330228 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:11.829514 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:12.329741 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:12.829575 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:13.330163 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:13.830484 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:14.330636 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:14.829053 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:15.329666 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:15.829780 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:16.328977 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:16.830390 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:17.328793 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:17.830050 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:18.329882 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:18.829313 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:19.329096 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:19.830391 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:20.329989 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:20.829687 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:21.329058 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:21.831278 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:22.328845 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:22.829546 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:23.331415 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:23.829877 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:24.330244 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:24.830067 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:25.329946 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:25.829864 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:26.329729 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:26.829484 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:27.329976 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:27.829462 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:28.329302 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:28.829425 1460604 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 00:25:29.329395 1460604 kapi.go:107] duration metric: took 2m30.50364767s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 00:25:29.331722 1460604 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-131319 cluster.
	I0914 00:25:29.333351 1460604 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 00:25:29.335145 1460604 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 00:25:29.337087 1460604 out.go:177] * Enabled addons: volcano, storage-provisioner, nvidia-device-plugin, ingress-dns, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0914 00:25:29.338877 1460604 addons.go:510] duration metric: took 2m42.733711257s for enable addons: enabled=[volcano storage-provisioner nvidia-device-plugin ingress-dns cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0914 00:25:29.338930 1460604 start.go:246] waiting for cluster config update ...
	I0914 00:25:29.338951 1460604 start.go:255] writing updated cluster config ...
	I0914 00:25:29.339745 1460604 ssh_runner.go:195] Run: rm -f paused
	I0914 00:25:29.675558 1460604 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 00:25:29.677509 1460604 out.go:177] * Done! kubectl is now configured to use "addons-131319" cluster and "default" namespace by default
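
The gcp-auth lines above describe the addon's opt-out mechanism: once the webhook is healthy, credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label. A minimal sketch of opting a pod out at creation time; the label key is taken from the output above, while the value "true" is an assumption (the log only names the key):

    # Hypothetical pod that the gcp-auth webhook should skip.
    # Label key from the minikube output above; the value "true" is assumed.
    kubectl --context addons-131319 run no-gcp-creds --image=nginx \
      --labels=gcp-auth-skip-secret=true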
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	108b7f75da8ea       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   de44d19e07013       gadget-b9bjg
	74d0946366e2e       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   de73d4db0924b       gcp-auth-89d5ffd79-kk2v5
	88e428d747c98       8b46b1cd48760       4 minutes ago       Running             admission                                0                   2be4cd59d0a1f       volcano-admission-77d7d48b68-spsmg
	c4896bd718553       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   405ca1e3ecd30       csi-hostpathplugin-f7rhq
	b9171e5c8d21a       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   405ca1e3ecd30       csi-hostpathplugin-f7rhq
	02b4145f17cb5       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   405ca1e3ecd30       csi-hostpathplugin-f7rhq
	3be6826d2fcd6       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   405ca1e3ecd30       csi-hostpathplugin-f7rhq
	99d17f11c4e1f       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   405ca1e3ecd30       csi-hostpathplugin-f7rhq
	33ecd942767b0       289a818c8d9c5       5 minutes ago       Running             controller                               0                   4e4dddb5916c0       ingress-nginx-controller-bc57996ff-hwbq4
	6fe010a1c6421       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   06b3635d1b33f       csi-hostpath-resizer-0
	4c187c63839f1       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   36f8ff5c62a68       volcano-controllers-56675bb4d5-7cvvq
	44711f8016161       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   405ca1e3ecd30       csi-hostpathplugin-f7rhq
	1b9d6b3ea6acc       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   03a81566076de       csi-hostpath-attacher-0
	e52a3d3549513       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   da56f8e4b51c7       registry-66c9cd494c-xcfqw
	2a8301c12071a       420193b27261a       5 minutes ago       Exited              patch                                    0                   2b9270c673f56       ingress-nginx-admission-patch-6x4vs
	c3e3df4dc49f5       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   18fded48ba9d7       snapshot-controller-56fcc65765-knqhg
	47610afe825af       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   78df62aa03393       registry-proxy-thrrk
	9720d9bb49947       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   23fa04c664209       local-path-provisioner-86d989889c-d7dnn
	331bd580804b6       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   6d593daf47e79       snapshot-controller-56fcc65765-fvdpn
	891e7b6f9bdc1       420193b27261a       5 minutes ago       Exited              create                                   0                   c0d700512212e       ingress-nginx-admission-create-h4ssl
	8789c36f6ec1f       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   0ea7a55596632       metrics-server-84c5f94fbc-ltmw6
	beeec4a56c138       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   a8fc47f839bb7       cloud-spanner-emulator-769b77f747-7qcsk
	a62dc2e18abef       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   df9ac4886250b       nvidia-device-plugin-daemonset-88zhs
	978afd77bb4d1       77bdba588b953       5 minutes ago       Running             yakd                                     0                   507bca89b0462       yakd-dashboard-67d98fc6b-fn4qh
	ce5d8255ca4e9       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   f52d053271600       volcano-scheduler-576bc46687-dj5ch
	b6cd0934c237b       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   3b28aef783143       coredns-7c65d6cfc9-dc7c8
	f3e2770051789       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   a1effe5d972cc       kube-ingress-dns-minikube
	8e4d3c8ad3a1e       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   0b0d5c4cf427c       storage-provisioner
	f133c3aa865ad       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   db10826d0f633       kindnet-sshwv
	964775faf6059       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   47fdd7b6507e5       kube-proxy-g7jvd
	53568138a0177       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   2daaad2600387       kube-controller-manager-addons-131319
	4ba9b8f45896a       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   a68f7b01d52c0       kube-scheduler-addons-131319
	9089dc82f69c6       27e3830e14027       6 minutes ago       Running             etcd                                     0                   32b0cdcf31f9d       etcd-addons-131319
	d1f5b9ef5bfa0       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   a601370270c66       kube-apiserver-addons-131319
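
Every addon container in the table is Running except gadget, which sits in Exited after 5 attempts; the kubelet log further down shows the matching CrashLoopBackOff back-off. A sketch of reproducing this view on the node itself, using the CRI socket named in the node annotations below:

    # List all containers (including exited ones) via the containerd CRI socket.
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a
    # Tail the crashing container's output; the ID prefix comes from the table above.
    sudo crictl logs --tail 20 108b7f75da8ea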
	
	
	==> containerd <==
	Sep 14 00:25:41 addons-131319 containerd[813]: time="2024-09-14T00:25:41.415472808Z" level=info msg="RemovePodSandbox \"107e8500aa37845a89e5f374ee59781f3de320659d4d4301b90552db6d52b14c\" returns successfully"
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.265739062Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.381360348Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.390978128Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.394257042Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 128.469947ms"
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.394306322Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.396756389Z" level=info msg="CreateContainer within sandbox \"de44d19e07013fd1fb43267d52b5499c4a3d6d5716b7ee4954006312a5672723\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.414029347Z" level=info msg="CreateContainer within sandbox \"de44d19e07013fd1fb43267d52b5499c4a3d6d5716b7ee4954006312a5672723\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c\""
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.414718838Z" level=info msg="StartContainer for \"108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c\""
	Sep 14 00:26:25 addons-131319 containerd[813]: time="2024-09-14T00:26:25.469317960Z" level=info msg="StartContainer for \"108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c\" returns successfully"
	Sep 14 00:26:26 addons-131319 containerd[813]: time="2024-09-14T00:26:26.975837495Z" level=info msg="shim disconnected" id=108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c namespace=k8s.io
	Sep 14 00:26:26 addons-131319 containerd[813]: time="2024-09-14T00:26:26.975971205Z" level=warning msg="cleaning up after shim disconnected" id=108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c namespace=k8s.io
	Sep 14 00:26:26 addons-131319 containerd[813]: time="2024-09-14T00:26:26.975982914Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 14 00:26:27 addons-131319 containerd[813]: time="2024-09-14T00:26:27.374256571Z" level=info msg="RemoveContainer for \"36b97ad0ea110724c3456b6108bc5d0621d8cb1afac74756e5bff2368fee29f8\""
	Sep 14 00:26:27 addons-131319 containerd[813]: time="2024-09-14T00:26:27.381691138Z" level=info msg="RemoveContainer for \"36b97ad0ea110724c3456b6108bc5d0621d8cb1afac74756e5bff2368fee29f8\" returns successfully"
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.419824179Z" level=info msg="RemoveContainer for \"4f7cdcddc5bf22e2e05fa81c5f60a5b6f2065f35472bff040f40336e76736e6a\""
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.427602368Z" level=info msg="RemoveContainer for \"4f7cdcddc5bf22e2e05fa81c5f60a5b6f2065f35472bff040f40336e76736e6a\" returns successfully"
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.431954230Z" level=info msg="StopPodSandbox for \"673f90123f851ff664968138832c5843ca183c14aa75e72c5f1b516f96b5ff78\""
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.441177787Z" level=info msg="TearDown network for sandbox \"673f90123f851ff664968138832c5843ca183c14aa75e72c5f1b516f96b5ff78\" successfully"
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.441217910Z" level=info msg="StopPodSandbox for \"673f90123f851ff664968138832c5843ca183c14aa75e72c5f1b516f96b5ff78\" returns successfully"
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.441708198Z" level=info msg="RemovePodSandbox for \"673f90123f851ff664968138832c5843ca183c14aa75e72c5f1b516f96b5ff78\""
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.441757822Z" level=info msg="Forcibly stopping sandbox \"673f90123f851ff664968138832c5843ca183c14aa75e72c5f1b516f96b5ff78\""
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.449906881Z" level=info msg="TearDown network for sandbox \"673f90123f851ff664968138832c5843ca183c14aa75e72c5f1b516f96b5ff78\" successfully"
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.457375984Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"673f90123f851ff664968138832c5843ca183c14aa75e72c5f1b516f96b5ff78\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 14 00:26:41 addons-131319 containerd[813]: time="2024-09-14T00:26:41.457496984Z" level=info msg="RemovePodSandbox \"673f90123f851ff664968138832c5843ca183c14aa75e72c5f1b516f96b5ff78\" returns successfully"
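
The containerd entries show the gadget image being pulled by a tag-plus-digest reference, resolved to a content-addressed image ID, and the shim being cleaned up each time the container exits. To confirm what the runtime resolved, one could list images with their digests (a sketch; crictl ships on minikube nodes):

    # Show the pulled image together with its repo digest, matching the log above.
    sudo crictl images --digests | grep inspektor-gadget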
	
	
	==> coredns [b6cd0934c237b612e42e84be7571250f808d19083ad528df10aaa8aea3e6d1db] <==
	[INFO] 10.244.0.8:55648 - 14140 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039081s
	[INFO] 10.244.0.8:54885 - 1528 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001627383s
	[INFO] 10.244.0.8:54885 - 6906 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001849077s
	[INFO] 10.244.0.8:38923 - 27202 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000055664s
	[INFO] 10.244.0.8:38923 - 30016 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037465s
	[INFO] 10.244.0.8:36250 - 58725 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000146757s
	[INFO] 10.244.0.8:36250 - 65377 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000036119s
	[INFO] 10.244.0.8:41080 - 47214 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000059307s
	[INFO] 10.244.0.8:41080 - 60259 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034223s
	[INFO] 10.244.0.8:51163 - 58644 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000038958s
	[INFO] 10.244.0.8:51163 - 53270 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043512s
	[INFO] 10.244.0.8:37387 - 4232 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.009740118s
	[INFO] 10.244.0.8:37387 - 9350 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.010484353s
	[INFO] 10.244.0.8:32918 - 15857 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000048188s
	[INFO] 10.244.0.8:32918 - 11508 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000038876s
	[INFO] 10.244.0.24:60777 - 36020 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00032269s
	[INFO] 10.244.0.24:52759 - 55462 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00025746s
	[INFO] 10.244.0.24:57641 - 16333 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00019314s
	[INFO] 10.244.0.24:57550 - 43973 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011771s
	[INFO] 10.244.0.24:35114 - 11496 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138674s
	[INFO] 10.244.0.24:56302 - 33754 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094925s
	[INFO] 10.244.0.24:42107 - 50909 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002746346s
	[INFO] 10.244.0.24:47569 - 56000 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002876388s
	[INFO] 10.244.0.24:45818 - 23619 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001572212s
	[INFO] 10.244.0.24:38865 - 59979 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001723908s
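
These coredns queries illustrate the cluster DNS search-path expansion: with the default ndots:5, a name like storage.googleapis.com is first tried against every search suffix (each answered NXDOMAIN) before the bare name resolves NOERROR. A sketch of reducing that churn for a single pod by lowering ndots through the standard dnsConfig API (the pod name is hypothetical):

    # Hypothetical pod with ndots=1, so dotted names skip the search list.
    kubectl --context addons-131319 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: low-ndots-demo
    spec:
      containers:
      - name: app
        image: nginx
      dnsConfig:
        options:
        - name: ndots
          value: "1"
    EOF

Appending a trailing dot (storage.googleapis.com.) achieves the same effect for an individual lookup without any pod changes.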
	
	
	==> describe nodes <==
	Name:               addons-131319
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-131319
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-131319
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T00_22_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-131319
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-131319"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 00:22:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-131319
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 00:28:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 00:25:44 +0000   Sat, 14 Sep 2024 00:22:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 00:25:44 +0000   Sat, 14 Sep 2024 00:22:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 00:25:44 +0000   Sat, 14 Sep 2024 00:22:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 00:25:44 +0000   Sat, 14 Sep 2024 00:22:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-131319
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 90f06a4a5d3a40ecbdee7641d80edad0
	  System UUID:                8dedbe16-f400-409f-9a74-40bad6de1869
	  Boot ID:                    31d76137-2e5d-4866-b75b-16f7e69e7ff6
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-7qcsk     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-b9bjg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-kk2v5                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-hwbq4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-dc7c8                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-f7rhq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-131319                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-sshwv                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m4s
	  kube-system                 kube-apiserver-addons-131319                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-131319       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-g7jvd                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-addons-131319                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 metrics-server-84c5f94fbc-ltmw6             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m58s
	  kube-system                 nvidia-device-plugin-daemonset-88zhs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-xcfqw                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-proxy-thrrk                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 snapshot-controller-56fcc65765-fvdpn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-knqhg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-d7dnn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-77d7d48b68-spsmg          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-7cvvq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-scheduler-576bc46687-dj5ch          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-fn4qh              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 6m1s  kube-proxy       
	  Normal   Starting                 6m8s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s  kubelet          Node addons-131319 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s  kubelet          Node addons-131319 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s  kubelet          Node addons-131319 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s  node-controller  Node addons-131319 event: Registered Node addons-131319 in Controller
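
The node description is the key scheduling datum in this report: the node exposes 2 allocatable CPUs and the non-terminated pods already request 1050m (52%), so no pending pod can request more than roughly 950m of CPU and still fit. Recomputing this from a live cluster takes two standard kubectl calls:

    # Allocatable CPU and the current request/limit totals for the node.
    kubectl --context addons-131319 get node addons-131319 -o jsonpath='{.status.allocatable.cpu}'
    kubectl --context addons-131319 describe node addons-131319 | grep -A 8 'Allocated resources'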
	
	
	==> dmesg <==
	
	
	==> etcd [9089dc82f69c668d9f6f7c3aa3a61bfbf8fe4899afe3d714436a0638df4f3cd0] <==
	{"level":"info","ts":"2024-09-14T00:22:35.170007Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T00:22:35.170245Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T00:22:35.170273Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T00:22:35.176655Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-14T00:22:35.176686Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-14T00:22:35.234790Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T00:22:35.234865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T00:22:35.234911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-14T00:22:35.234934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T00:22:35.234942Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T00:22:35.234953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-14T00:22:35.234974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T00:22:35.235813Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:22:35.236119Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:22:35.235787Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-131319 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T00:22:35.237500Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T00:22:35.243616Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:22:35.244651Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-14T00:22:35.244939Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:22:35.245177Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:22:35.245275Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T00:22:35.246571Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T00:22:35.247884Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T00:22:35.248013Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T00:22:35.252793Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [74d0946366e2e0e98ee81d93789e9a51aacab98320087546ac10bf339a5dcbdf] <==
	2024/09/14 00:25:29 GCP Auth Webhook started!
	2024/09/14 00:25:47 Ready to marshal response ...
	2024/09/14 00:25:47 Ready to write response ...
	2024/09/14 00:25:47 Ready to marshal response ...
	2024/09/14 00:25:47 Ready to write response ...
	
	
	==> kernel <==
	 00:28:49 up  8:11,  0 users,  load average: 0.42, 1.12, 1.93
	Linux addons-131319 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f133c3aa865ada398faab5ba5c9622ab842c01f127b5eb4e41db89dd7b0fa0ff] <==
	I0914 00:26:47.937604       1 main.go:299] handling current node
	I0914 00:26:57.944391       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:26:57.944428       1 main.go:299] handling current node
	I0914 00:27:07.944263       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:27:07.944300       1 main.go:299] handling current node
	I0914 00:27:17.943977       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:27:17.944012       1 main.go:299] handling current node
	I0914 00:27:27.944076       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:27:27.944112       1 main.go:299] handling current node
	I0914 00:27:37.940153       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:27:37.940256       1 main.go:299] handling current node
	I0914 00:27:47.937165       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:27:47.937202       1 main.go:299] handling current node
	I0914 00:27:57.944000       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:27:57.944040       1 main.go:299] handling current node
	I0914 00:28:07.942279       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:28:07.942485       1 main.go:299] handling current node
	I0914 00:28:17.946212       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:28:17.946250       1 main.go:299] handling current node
	I0914 00:28:27.944098       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:28:27.944135       1 main.go:299] handling current node
	I0914 00:28:37.943619       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:28:37.943655       1 main.go:299] handling current node
	I0914 00:28:47.937054       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 00:28:47.937096       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d1f5b9ef5bfa066787a6c775722496e2a1dc7a4b2f115d878a90f0c9754a6386] <==
	W0914 00:23:56.216070       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:23:57.261920       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:23:58.342619       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:23:59.375877       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:00.449717       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:01.507068       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:01.801224       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.150.67:443: connect: connection refused
	E0914 00:24:01.801261       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.150.67:443: connect: connection refused" logger="UnhandledError"
	W0914 00:24:01.803158       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:01.855366       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.150.67:443: connect: connection refused
	E0914 00:24:01.855424       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.150.67:443: connect: connection refused" logger="UnhandledError"
	W0914 00:24:01.857106       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:02.570028       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:03.667636       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:04.739605       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:05.818963       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:06.856669       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.98.89.59:443: connect: connection refused
	W0914 00:24:21.834609       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.150.67:443: connect: connection refused
	E0914 00:24:21.834651       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.150.67:443: connect: connection refused" logger="UnhandledError"
	W0914 00:25:01.813968       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.150.67:443: connect: connection refused
	E0914 00:25:01.814010       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.150.67:443: connect: connection refused" logger="UnhandledError"
	W0914 00:25:01.863083       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.150.67:443: connect: connection refused
	E0914 00:25:01.863128       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.150.67:443: connect: connection refused" logger="UnhandledError"
	I0914 00:25:47.187194       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0914 00:25:47.229865       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
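
Two admission failure modes are interleaved above: the volcano webhooks fail closed (requests are rejected while volcano-admission-service is unreachable), while the gcp-auth webhook fails open (the error is logged and the request proceeds). Each behavior is set by the webhook's failurePolicy, which can be read back directly:

    # Fail => "failing closed"; Ignore => "failing open".
    kubectl --context addons-131319 get mutatingwebhookconfigurations \
      -o custom-columns='NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy'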
	
	
	==> kube-controller-manager [53568138a017764dc6f802a284d64a686a04947f679e556c770cfe0277740a84] <==
	I0914 00:25:01.832271       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 00:25:01.840633       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 00:25:01.858535       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 00:25:01.873283       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:01.885596       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:01.889265       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:01.903094       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:03.083395       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:03.099016       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 00:25:04.286439       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:04.309583       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 00:25:05.292079       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:05.302912       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:05.317882       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 00:25:05.321516       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 00:25:05.334600       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 00:25:05.342931       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 00:25:29.201108       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="7.955213ms"
	I0914 00:25:29.201403       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="73.862µs"
	I0914 00:25:35.033470       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0914 00:25:35.037587       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0914 00:25:35.102204       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0914 00:25:35.102392       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0914 00:25:44.595071       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-131319"
	I0914 00:25:46.920467       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [964775faf605959f1ba5da1801a5937e9d1741e10dc4a9f3ef9ef2c052851941] <==
	I0914 00:22:47.650022       1 server_linux.go:66] "Using iptables proxy"
	I0914 00:22:47.753654       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0914 00:22:47.753717       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 00:22:47.794189       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 00:22:47.794256       1 server_linux.go:169] "Using iptables Proxier"
	I0914 00:22:47.797524       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 00:22:47.797800       1 server.go:483] "Version info" version="v1.31.1"
	I0914 00:22:47.797815       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 00:22:47.812097       1 config.go:199] "Starting service config controller"
	I0914 00:22:47.812143       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 00:22:47.812181       1 config.go:105] "Starting endpoint slice config controller"
	I0914 00:22:47.812186       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 00:22:47.812716       1 config.go:328] "Starting node config controller"
	I0914 00:22:47.812724       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 00:22:47.913131       1 shared_informer.go:320] Caches are synced for service config
	I0914 00:22:47.913206       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 00:22:47.913435       1 shared_informer.go:320] Caches are synced for node config
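
kube-proxy comes up cleanly in iptables mode; the one warning is that nodePortAddresses is unset, so NodePort services accept traffic on every local IP, and the log itself suggests the fix. A sketch of where that lives on a kubeadm-style cluster (the "primary" value is the config-file counterpart of the --nodeport-addresses primary flag the log recommends; the exact YAML form is an assumption):

    # Locate the field in the kube-proxy ConfigMap...
    kubectl --context addons-131319 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
    # ...and in config.conf one would set, e.g.:
    #   nodePortAddresses: ["primary"]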
	
	
	==> kube-scheduler [4ba9b8f45896a6445789235b1630f5df1d05e270f1e275e7e649ca03399c90bb] <==
	W0914 00:22:38.609049       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 00:22:38.609349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:38.609116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 00:22:38.609452       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.471338       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 00:22:39.471591       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.482913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 00:22:39.483183       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.487378       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 00:22:39.487619       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.505439       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 00:22:39.505493       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.576166       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0914 00:22:39.576400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.646432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 00:22:39.646762       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.654984       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 00:22:39.655219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.785809       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 00:22:39.785934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.790943       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 00:22:39.791150       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 00:22:39.844014       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 00:22:39.844258       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0914 00:22:41.984866       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
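
The scheduler's burst of "forbidden" list errors is a startup ordering race: its informers start before RBAC bootstrapping has propagated, and the final line shows the caches syncing once it has. Were such errors to persist past startup, the permissions could be probed directly with kubectl impersonation:

    # Check that the scheduler identity can list what its informers watch.
    kubectl --context addons-131319 auth can-i list services --as=system:kube-scheduler
    kubectl --context addons-131319 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler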
	
	
	==> kubelet <==
	Sep 14 00:26:42 addons-131319 kubelet[1497]: E0914 00:26:42.264977    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:26:55 addons-131319 kubelet[1497]: I0914 00:26:55.264771    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-88zhs" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 00:26:55 addons-131319 kubelet[1497]: I0914 00:26:55.266487    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:26:55 addons-131319 kubelet[1497]: E0914 00:26:55.266845    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:27:07 addons-131319 kubelet[1497]: I0914 00:27:07.264423    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-xcfqw" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 00:27:07 addons-131319 kubelet[1497]: I0914 00:27:07.265366    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:27:07 addons-131319 kubelet[1497]: E0914 00:27:07.265519    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:27:16 addons-131319 kubelet[1497]: I0914 00:27:16.264924    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-thrrk" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 00:27:18 addons-131319 kubelet[1497]: I0914 00:27:18.264108    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:27:18 addons-131319 kubelet[1497]: E0914 00:27:18.264323    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:27:33 addons-131319 kubelet[1497]: I0914 00:27:33.264791    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:27:33 addons-131319 kubelet[1497]: E0914 00:27:33.265486    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:27:48 addons-131319 kubelet[1497]: I0914 00:27:48.264448    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:27:48 addons-131319 kubelet[1497]: E0914 00:27:48.265104    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:27:57 addons-131319 kubelet[1497]: I0914 00:27:57.264944    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-88zhs" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 00:28:01 addons-131319 kubelet[1497]: I0914 00:28:01.271541    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:28:01 addons-131319 kubelet[1497]: E0914 00:28:01.272282    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:28:12 addons-131319 kubelet[1497]: I0914 00:28:12.264519    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:28:12 addons-131319 kubelet[1497]: E0914 00:28:12.264723    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:28:26 addons-131319 kubelet[1497]: I0914 00:28:26.264497    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:28:26 addons-131319 kubelet[1497]: E0914 00:28:26.264704    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:28:36 addons-131319 kubelet[1497]: I0914 00:28:36.264071    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-xcfqw" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 00:28:41 addons-131319 kubelet[1497]: I0914 00:28:41.265244    1497 scope.go:117] "RemoveContainer" containerID="108b7f75da8ea443242bab77e3434854cbf274bbb2af21111179638a7a7b131c"
	Sep 14 00:28:41 addons-131319 kubelet[1497]: E0914 00:28:41.265460    1497 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-b9bjg_gadget(ae425b35-7f17-4690-9b72-1a07b89405a6)\"" pod="gadget/gadget-b9bjg" podUID="ae425b35-7f17-4690-9b72-1a07b89405a6"
	Sep 14 00:28:44 addons-131319 kubelet[1497]: I0914 00:28:44.264124    1497 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-thrrk" secret="" err="secret \"gcp-auth\" not found"
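Everything the kubelet logs here is the back-off loop itself ("back-off 2m40s restarting failed container=gadget"), not the reason the gadget container keeps dying. A minimal follow-up while the cluster is still up is to pull the previous container's logs and the pod events:

	kubectl --context addons-131319 -n gadget logs gadget-b9bjg --previous
	kubectl --context addons-131319 -n gadget describe pod gadget-b9bjg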
	
	
	==> storage-provisioner [8e4d3c8ad3a1e1b71e56b2501b8a3a1141385f3e2b602bfd2046b589348b7411] <==
	I0914 00:22:52.691917       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 00:22:52.708691       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 00:22:52.708746       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 00:22:52.720448       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 00:22:52.720619       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-131319_edfa33ad-d3ea-4663-8ade-7d30f437aa40!
	I0914 00:22:52.721580       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c66e31fd-742a-472f-ad13-c49d32e2f5bb", APIVersion:"v1", ResourceVersion:"579", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-131319_edfa33ad-d3ea-4663-8ade-7d30f437aa40 became leader
	I0914 00:22:52.823179       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-131319_edfa33ad-d3ea-4663-8ade-7d30f437aa40!
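The provisioner takes an Endpoints-based leader-election lock (the Event above records the acquisition). If leadership ever looks stuck, the lock object itself can be inspected; the holder is typically recorded in the control-plane.alpha.kubernetes.io/leader annotation:

	kubectl --context addons-131319 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml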
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-131319 -n addons-131319
helpers_test.go:261: (dbg) Run:  kubectl --context addons-131319 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-h4ssl ingress-nginx-admission-patch-6x4vs test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-131319 describe pod ingress-nginx-admission-create-h4ssl ingress-nginx-admission-patch-6x4vs test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-131319 describe pod ingress-nginx-admission-create-h4ssl ingress-nginx-admission-patch-6x4vs test-job-nginx-0: exit status 1 (89.993871ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-h4ssl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6x4vs" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-131319 describe pod ingress-nginx-admission-create-h4ssl ingress-nginx-admission-patch-6x4vs test-job-nginx-0: exit status 1
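By the time the post-mortem ran, the three pods had already been cleaned up, hence the NotFound errors. Namespace events usually outlive the pods for a while and can recover the scheduling history, e.g.:

	kubectl --context addons-131319 get events -A --sort-by=.lastTimestamp | grep -i test-job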
--- FAIL: TestAddons/serial/Volcano (200.82s)

TestStartStop/group/old-k8s-version/serial/SecondStart (376.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-610182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-610182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m12.478523732s)

-- stdout --
	* [old-k8s-version-610182] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-610182" primary control-plane node in "old-k8s-version-610182" cluster
	* Pulling base image v0.0.45-1726243947-19640 ...
	* Restarting existing docker container for "old-k8s-version-610182" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-610182 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0914 01:12:13.960967 1662972 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:12:13.961207 1662972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:12:13.961236 1662972 out.go:358] Setting ErrFile to fd 2...
	I0914 01:12:13.961254 1662972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:12:13.961527 1662972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 01:12:13.961964 1662972 out.go:352] Setting JSON to false
	I0914 01:12:13.963099 1662972 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":32081,"bootTime":1726244253,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 01:12:13.963198 1662972 start.go:139] virtualization:  
	I0914 01:12:13.966867 1662972 out.go:177] * [old-k8s-version-610182] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 01:12:13.968884 1662972 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 01:12:13.968946 1662972 notify.go:220] Checking for updates...
	I0914 01:12:13.972072 1662972 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 01:12:13.973839 1662972 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 01:12:13.975362 1662972 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	I0914 01:12:13.977175 1662972 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 01:12:13.979120 1662972 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 01:12:13.981671 1662972 config.go:182] Loaded profile config "old-k8s-version-610182": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0914 01:12:13.984156 1662972 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 01:12:13.985729 1662972 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 01:12:14.025485 1662972 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 01:12:14.025621 1662972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:12:14.112735 1662972 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-14 01:12:14.102201962 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:12:14.112870 1662972 docker.go:318] overlay module found
	I0914 01:12:14.116130 1662972 out.go:177] * Using the docker driver based on existing profile
	I0914 01:12:14.121521 1662972 start.go:297] selected driver: docker
	I0914 01:12:14.121553 1662972 start.go:901] validating driver "docker" against &{Name:old-k8s-version-610182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610182 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:12:14.121683 1662972 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 01:12:14.122325 1662972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:12:14.234147 1662972 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:39 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-14 01:12:14.21919738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:12:14.234544 1662972 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:12:14.234580 1662972 cni.go:84] Creating CNI manager for ""
	I0914 01:12:14.234625 1662972 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 01:12:14.234670 1662972 start.go:340] cluster config:
	{Name:old-k8s-version-610182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:12:14.236710 1662972 out.go:177] * Starting "old-k8s-version-610182" primary control-plane node in "old-k8s-version-610182" cluster
	I0914 01:12:14.238310 1662972 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 01:12:14.239956 1662972 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 01:12:14.241336 1662972 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 01:12:14.241396 1662972 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0914 01:12:14.241409 1662972 cache.go:56] Caching tarball of preloaded images
	I0914 01:12:14.241487 1662972 preload.go:172] Found /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 01:12:14.241502 1662972 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0914 01:12:14.241655 1662972 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 01:12:14.241947 1662972 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/config.json ...
	W0914 01:12:14.278357 1662972 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 is of wrong architecture
	I0914 01:12:14.278376 1662972 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 01:12:14.278472 1662972 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 01:12:14.278490 1662972 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 01:12:14.278495 1662972 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 01:12:14.278503 1662972 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 01:12:14.278508 1662972 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 01:12:14.403093 1662972 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 01:12:14.403153 1662972 cache.go:194] Successfully downloaded all kic artifacts
	I0914 01:12:14.403185 1662972 start.go:360] acquireMachinesLock for old-k8s-version-610182: {Name:mk026e09078b7cd9b6823c250d4dff4498bea6e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:12:14.403261 1662972 start.go:364] duration metric: took 53.153µs to acquireMachinesLock for "old-k8s-version-610182"
	I0914 01:12:14.403283 1662972 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:12:14.403288 1662972 fix.go:54] fixHost starting: 
	I0914 01:12:14.403569 1662972 cli_runner.go:164] Run: docker container inspect old-k8s-version-610182 --format={{.State.Status}}
	I0914 01:12:14.426410 1662972 fix.go:112] recreateIfNeeded on old-k8s-version-610182: state=Stopped err=<nil>
	W0914 01:12:14.426445 1662972 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:12:14.428466 1662972 out.go:177] * Restarting existing docker container for "old-k8s-version-610182" ...
	I0914 01:12:14.430236 1662972 cli_runner.go:164] Run: docker start old-k8s-version-610182
	I0914 01:12:14.812618 1662972 cli_runner.go:164] Run: docker container inspect old-k8s-version-610182 --format={{.State.Status}}
	I0914 01:12:14.837617 1662972 kic.go:430] container "old-k8s-version-610182" state is running.
	I0914 01:12:14.838042 1662972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610182
	I0914 01:12:14.878568 1662972 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/config.json ...
	I0914 01:12:14.878824 1662972 machine.go:93] provisionDockerMachine start ...
	I0914 01:12:14.879834 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:14.910126 1662972 main.go:141] libmachine: Using SSH client type: native
	I0914 01:12:14.910439 1662972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34914 <nil> <nil>}
	I0914 01:12:14.910457 1662972 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:12:14.911081 1662972 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0914 01:12:18.063454 1662972 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610182
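The single "ssh: handshake failed: EOF" above is benign: libmachine dials as soon as docker start returns, before sshd inside the container is listening, and retries until the hostname command succeeds about three seconds later. The same reachability check can be done by hand:

	docker exec old-k8s-version-610182 hostname
	minikube -p old-k8s-version-610182 ssh "hostname"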
	
	I0914 01:12:18.063479 1662972 ubuntu.go:169] provisioning hostname "old-k8s-version-610182"
	I0914 01:12:18.063560 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:18.097769 1662972 main.go:141] libmachine: Using SSH client type: native
	I0914 01:12:18.098015 1662972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34914 <nil> <nil>}
	I0914 01:12:18.098026 1662972 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-610182 && echo "old-k8s-version-610182" | sudo tee /etc/hostname
	I0914 01:12:18.294841 1662972 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-610182
	
	I0914 01:12:18.295008 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:18.333158 1662972 main.go:141] libmachine: Using SSH client type: native
	I0914 01:12:18.333402 1662972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34914 <nil> <nil>}
	I0914 01:12:18.333420 1662972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-610182' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-610182/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-610182' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 01:12:18.552711 1662972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:12:18.552736 1662972 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-1454467/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-1454467/.minikube}
	I0914 01:12:18.552909 1662972 ubuntu.go:177] setting up certificates
	I0914 01:12:18.552920 1662972 provision.go:84] configureAuth start
	I0914 01:12:18.552984 1662972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610182
	I0914 01:12:18.646432 1662972 provision.go:143] copyHostCerts
	I0914 01:12:18.646515 1662972 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.pem, removing ...
	I0914 01:12:18.646534 1662972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.pem
	I0914 01:12:18.646656 1662972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.pem (1078 bytes)
	I0914 01:12:18.646797 1662972 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-1454467/.minikube/cert.pem, removing ...
	I0914 01:12:18.646814 1662972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-1454467/.minikube/cert.pem
	I0914 01:12:18.646864 1662972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/cert.pem (1123 bytes)
	I0914 01:12:18.646973 1662972 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-1454467/.minikube/key.pem, removing ...
	I0914 01:12:18.647010 1662972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-1454467/.minikube/key.pem
	I0914 01:12:18.647056 1662972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/key.pem (1679 bytes)
	I0914 01:12:18.647137 1662972 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-610182 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-610182]
	I0914 01:12:20.006130 1662972 provision.go:177] copyRemoteCerts
	I0914 01:12:20.006291 1662972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:12:20.007047 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:20.050074 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:20.158835 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:12:20.209645 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 01:12:20.241221 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0914 01:12:20.272436 1662972 provision.go:87] duration metric: took 1.719492062s to configureAuth
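configureAuth regenerated and pushed a server certificate with SANs for 127.0.0.1, 192.168.76.2, localhost, minikube and the profile name (see the san=[...] line above). If a TLS failure is suspected later, the deployed cert can be checked in place:

	minikube -p old-k8s-version-610182 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -subject -enddate"
	minikube -p old-k8s-version-610182 ssh "sudo openssl x509 -in /etc/docker/server.pem -noout -text" | grep -A1 'Subject Alternative Name'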
	I0914 01:12:20.272464 1662972 ubuntu.go:193] setting minikube options for container-runtime
	I0914 01:12:20.272658 1662972 config.go:182] Loaded profile config "old-k8s-version-610182": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0914 01:12:20.272673 1662972 machine.go:96] duration metric: took 5.393813753s to provisionDockerMachine
	I0914 01:12:20.272681 1662972 start.go:293] postStartSetup for "old-k8s-version-610182" (driver="docker")
	I0914 01:12:20.272696 1662972 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:12:20.272750 1662972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:12:20.272792 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:20.289560 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:20.387040 1662972 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:12:20.391156 1662972 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 01:12:20.391194 1662972 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 01:12:20.391220 1662972 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 01:12:20.391233 1662972 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 01:12:20.391245 1662972 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-1454467/.minikube/addons for local assets ...
	I0914 01:12:20.391305 1662972 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-1454467/.minikube/files for local assets ...
	I0914 01:12:20.391386 1662972 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/ssl/certs/14598482.pem -> 14598482.pem in /etc/ssl/certs
	I0914 01:12:20.391505 1662972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:12:20.405553 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/ssl/certs/14598482.pem --> /etc/ssl/certs/14598482.pem (1708 bytes)
	I0914 01:12:20.449572 1662972 start.go:296] duration metric: took 176.871244ms for postStartSetup
	I0914 01:12:20.449664 1662972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:12:20.449723 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:20.466488 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:20.561657 1662972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 01:12:20.568598 1662972 fix.go:56] duration metric: took 6.165300952s for fixHost
	I0914 01:12:20.568621 1662972 start.go:83] releasing machines lock for "old-k8s-version-610182", held for 6.165350027s
	I0914 01:12:20.568706 1662972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-610182
	I0914 01:12:20.613540 1662972 ssh_runner.go:195] Run: cat /version.json
	I0914 01:12:20.613592 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:20.613836 1662972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:12:20.613928 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:20.641765 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:20.673542 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:20.747642 1662972 ssh_runner.go:195] Run: systemctl --version
	I0914 01:12:20.881095 1662972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 01:12:20.885529 1662972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 01:12:20.912040 1662972 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 01:12:20.912122 1662972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:12:20.923338 1662972 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
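Only the loopback config needed patching on this node; there were no bridge/podman configs to move aside (anything disabled would have been renamed to *.mk_disabled). The resulting CNI directory can be listed to confirm:

	minikube -p old-k8s-version-610182 ssh "ls /etc/cni/net.d"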
	I0914 01:12:20.923359 1662972 start.go:495] detecting cgroup driver to use...
	I0914 01:12:20.923391 1662972 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 01:12:20.923440 1662972 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 01:12:20.941735 1662972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 01:12:20.955720 1662972 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:12:20.955818 1662972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:12:20.970777 1662972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:12:20.987018 1662972 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:12:21.106026 1662972 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:12:21.211795 1662972 docker.go:233] disabling docker service ...
	I0914 01:12:21.211877 1662972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:12:21.229352 1662972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:12:21.249249 1662972 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:12:21.353987 1662972 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:12:21.493834 1662972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:12:21.511715 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:12:21.529417 1662972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0914 01:12:21.539976 1662972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 01:12:21.550409 1662972 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 01:12:21.550479 1662972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 01:12:21.560649 1662972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 01:12:21.570863 1662972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 01:12:21.580659 1662972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 01:12:21.590581 1662972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:12:21.600159 1662972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 01:12:21.613053 1662972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:12:21.622262 1662972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:12:21.631064 1662972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:12:21.742603 1662972 ssh_runner.go:195] Run: sudo systemctl restart containerd
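All of the sed edits above rewrite /etc/containerd/config.toml in place (cgroupfs instead of systemd cgroups, pause:3.2 sandbox image, runc.v2 runtime) before the daemon restart. After the restart, the effective runtime config is visible through CRI:

	minikube -p old-k8s-version-610182 ssh "sudo crictl info" | grep -i -e systemdcgroup -e sandboximage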
	I0914 01:12:21.954666 1662972 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 01:12:21.954772 1662972 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 01:12:21.958610 1662972 start.go:563] Will wait 60s for crictl version
	I0914 01:12:21.958702 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:12:21.962150 1662972 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:12:22.030849 1662972 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0914 01:12:22.030957 1662972 ssh_runner.go:195] Run: containerd --version
	I0914 01:12:22.053924 1662972 ssh_runner.go:195] Run: containerd --version
	I0914 01:12:22.096800 1662972 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0914 01:12:22.098593 1662972 cli_runner.go:164] Run: docker network inspect old-k8s-version-610182 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 01:12:22.114136 1662972 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0914 01:12:22.117919 1662972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:12:22.135589 1662972 kubeadm.go:883] updating cluster {Name:old-k8s-version-610182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610182 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:12:22.135716 1662972 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 01:12:22.135800 1662972 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:12:22.197446 1662972 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 01:12:22.197471 1662972 containerd.go:534] Images already preloaded, skipping extraction
	I0914 01:12:22.197534 1662972 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:12:22.250742 1662972 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 01:12:22.250826 1662972 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:12:22.250849 1662972 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0914 01:12:22.251006 1662972 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-610182 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
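This is the systemd drop-in that gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few steps later; note the v1.20-era flags (--network-plugin=cni, remote containerd endpoint) that were removed from newer kubelets. The merged unit can be reviewed on the node with:

	minikube -p old-k8s-version-610182 ssh "systemctl cat kubelet"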
	I0914 01:12:22.251106 1662972 ssh_runner.go:195] Run: sudo crictl info
	I0914 01:12:22.300155 1662972 cni.go:84] Creating CNI manager for ""
	I0914 01:12:22.300188 1662972 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 01:12:22.300198 1662972 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:12:22.300246 1662972 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-610182 NodeName:old-k8s-version-610182 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 01:12:22.300411 1662972 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-610182"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
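The manifest above uses kubeadm's v1beta2 API, which is what kubeadm v1.20 expects (newer releases moved on to v1beta3 and later). Anything minikube left unset falls back to that version's defaults, which can be compared against the node's own binary:

	minikube -p old-k8s-version-610182 ssh "sudo /var/lib/minikube/binaries/v1.20.0/kubeadm config print init-defaults"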
	
	I0914 01:12:22.300493 1662972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 01:12:22.310324 1662972 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:12:22.310424 1662972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:12:22.320479 1662972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0914 01:12:22.341061 1662972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:12:22.361852 1662972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0914 01:12:22.382633 1662972 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0914 01:12:22.386690 1662972 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:12:22.399018 1662972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:12:22.517694 1662972 ssh_runner.go:195] Run: sudo systemctl start kubelet
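With the units in place and the hosts entry added, the kubelet is restarted. Whether it actually stays up is quickest to check from systemd and its journal:

	minikube -p old-k8s-version-610182 ssh "systemctl is-active kubelet && sudo journalctl -u kubelet --no-pager -n 20"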
	I0914 01:12:22.538240 1662972 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182 for IP: 192.168.76.2
	I0914 01:12:22.538265 1662972 certs.go:194] generating shared ca certs ...
	I0914 01:12:22.538282 1662972 certs.go:226] acquiring lock for ca certs: {Name:mkfaf13a8785cc44d16a85b8163136271bcd698b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:12:22.538464 1662972 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.key
	I0914 01:12:22.538534 1662972 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.key
	I0914 01:12:22.538548 1662972 certs.go:256] generating profile certs ...
	I0914 01:12:22.538652 1662972 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.key
	I0914 01:12:22.538746 1662972 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/apiserver.key.a3bf719e
	I0914 01:12:22.538827 1662972 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/proxy-client.key
	I0914 01:12:22.538964 1662972 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/1459848.pem (1338 bytes)
	W0914 01:12:22.539013 1662972 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/1459848_empty.pem, impossibly tiny 0 bytes
	I0914 01:12:22.539027 1662972 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 01:12:22.539053 1662972 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:12:22.539096 1662972 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:12:22.539139 1662972 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem (1679 bytes)
	I0914 01:12:22.539193 1662972 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/ssl/certs/14598482.pem (1708 bytes)
	I0914 01:12:22.539879 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:12:22.627597 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:12:22.667803 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:12:22.701769 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 01:12:22.727346 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 01:12:22.752100 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:12:22.787518 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:12:22.816998 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:12:22.848573 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/ssl/certs/14598482.pem --> /usr/share/ca-certificates/14598482.pem (1708 bytes)
	I0914 01:12:22.877488 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:12:22.906712 1662972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/1459848.pem --> /usr/share/ca-certificates/1459848.pem (1338 bytes)
	I0914 01:12:22.938409 1662972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:12:22.962099 1662972 ssh_runner.go:195] Run: openssl version
	I0914 01:12:22.968264 1662972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14598482.pem && ln -fs /usr/share/ca-certificates/14598482.pem /etc/ssl/certs/14598482.pem"
	I0914 01:12:22.979300 1662972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14598482.pem
	I0914 01:12:22.983581 1662972 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 00:32 /usr/share/ca-certificates/14598482.pem
	I0914 01:12:22.983647 1662972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14598482.pem
	I0914 01:12:22.993212 1662972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14598482.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:12:23.004301 1662972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:12:23.018799 1662972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:12:23.023198 1662972 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:22 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:12:23.023317 1662972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:12:23.031562 1662972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:12:23.041558 1662972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1459848.pem && ln -fs /usr/share/ca-certificates/1459848.pem /etc/ssl/certs/1459848.pem"
	I0914 01:12:23.052187 1662972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1459848.pem
	I0914 01:12:23.055934 1662972 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 00:32 /usr/share/ca-certificates/1459848.pem
	I0914 01:12:23.056011 1662972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1459848.pem
	I0914 01:12:23.064090 1662972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1459848.pem /etc/ssl/certs/51391683.0"
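
The three hash-and-link sequences above install each CA into OpenSSL's hashed trust directory: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and the cert is symlinked as /etc/ssl/certs/<hash>.0 (3ec20f2e.0, b5213941.0, 51391683.0) so hash-based lookups can find it. Not part of the log: a hypothetical helper doing the same step, assuming an openssl binary on PATH.

	// trustCert links certPath into dir under its OpenSSL subject hash
	// (the "<hash>.0" naming that OpenSSL's hashed-directory lookup expects).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func trustCert(certPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(dir, hash+".0")
		os.Remove(link) // replace a stale link, if any
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
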
	I0914 01:12:23.073918 1662972 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:12:23.078005 1662972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:12:23.084936 1662972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:12:23.091843 1662972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:12:23.098836 1662972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:12:23.105747 1662972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:12:23.112594 1662972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
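
Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); a cert failing the check would be regenerated rather than reused. Not part of the log: the same check done natively with Go's crypto/x509, as a sketch.

	// expiresWithin reports whether the PEM certificate at path expires
	// within d of now - the crypto/x509 analogue of
	// `openssl x509 -checkend <seconds>`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}
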
	I0914 01:12:23.119558 1662972 kubeadm.go:392] StartCluster: {Name:old-k8s-version-610182 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-610182 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:12:23.119657 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 01:12:23.119715 1662972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:12:23.157243 1662972 cri.go:89] found id: "1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1"
	I0914 01:12:23.157263 1662972 cri.go:89] found id: "90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc"
	I0914 01:12:23.157268 1662972 cri.go:89] found id: "b43e26634f1f5a117b3a794841b670ab7524bca7f5a71359719226fb144777ee"
	I0914 01:12:23.157271 1662972 cri.go:89] found id: "d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158"
	I0914 01:12:23.157275 1662972 cri.go:89] found id: "c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb"
	I0914 01:12:23.157278 1662972 cri.go:89] found id: "9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833"
	I0914 01:12:23.157282 1662972 cri.go:89] found id: "9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51"
	I0914 01:12:23.157285 1662972 cri.go:89] found id: "470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e"
	I0914 01:12:23.157288 1662972 cri.go:89] found id: ""
	I0914 01:12:23.157339 1662972 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0914 01:12:23.169803 1662972 cri.go:116] JSON = null
	W0914 01:12:23.169851 1662972 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
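
The kubeadm.go:399 warning is bookkeeping, not a failure: `crictl ps -a` listed 8 kube-system containers, while `runc ... list -f json` printed the literal `null` (logged as "JSON = null"), which decodes to an empty list, so the unpause pass sees 0 paused containers against 8 listed and is skipped. Not part of the log: a small demonstration of why `null` unmarshals to zero containers; runcState is a trimmed, hypothetical stand-in for runc's state JSON.

	// runc's `list -f json` prints a JSON array of container states, or
	// the literal `null` when it tracks none. Unmarshalling null into a
	// slice leaves it nil, i.e. zero containers.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	func main() {
		for _, raw := range []string{`null`, `[{"id":"abc","status":"paused"}]`} {
			var states []runcState
			if err := json.Unmarshal([]byte(raw), &states); err != nil {
				panic(err)
			}
			fmt.Printf("%s -> %d container(s)\n", raw, len(states))
		}
	}
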
	I0914 01:12:23.169918 1662972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:12:23.178741 1662972 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:12:23.178762 1662972 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:12:23.178818 1662972 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:12:23.187142 1662972 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:12:23.187618 1662972 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-610182" does not appear in /home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 01:12:23.187728 1662972 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-1454467/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-610182" cluster setting kubeconfig missing "old-k8s-version-610182" context setting]
	I0914 01:12:23.188104 1662972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/kubeconfig: {Name:mk9726361d7deb93fbb6dba7857cc3f0a8a02233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:12:23.189363 1662972 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:12:23.198216 1662972 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0914 01:12:23.198248 1662972 kubeadm.go:597] duration metric: took 19.480043ms to restartPrimaryControlPlane
	I0914 01:12:23.198257 1662972 kubeadm.go:394] duration metric: took 78.709856ms to StartCluster
	I0914 01:12:23.198276 1662972 settings.go:142] acquiring lock: {Name:mk71d0962f5f4196c9fea75fe9a601467858166a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:12:23.198331 1662972 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 01:12:23.198924 1662972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/kubeconfig: {Name:mk9726361d7deb93fbb6dba7857cc3f0a8a02233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:12:23.199116 1662972 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 01:12:23.199408 1662972 config.go:182] Loaded profile config "old-k8s-version-610182": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0914 01:12:23.199452 1662972 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 01:12:23.199553 1662972 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-610182"
	I0914 01:12:23.199574 1662972 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-610182"
	W0914 01:12:23.199580 1662972 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:12:23.199611 1662972 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-610182"
	I0914 01:12:23.199640 1662972 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-610182"
	I0914 01:12:23.199614 1662972 host.go:66] Checking if "old-k8s-version-610182" exists ...
	I0914 01:12:23.200011 1662972 cli_runner.go:164] Run: docker container inspect old-k8s-version-610182 --format={{.State.Status}}
	I0914 01:12:23.200329 1662972 cli_runner.go:164] Run: docker container inspect old-k8s-version-610182 --format={{.State.Status}}
	I0914 01:12:23.199641 1662972 addons.go:69] Setting dashboard=true in profile "old-k8s-version-610182"
	I0914 01:12:23.200561 1662972 addons.go:234] Setting addon dashboard=true in "old-k8s-version-610182"
	W0914 01:12:23.200570 1662972 addons.go:243] addon dashboard should already be in state true
	I0914 01:12:23.200598 1662972 host.go:66] Checking if "old-k8s-version-610182" exists ...
	I0914 01:12:23.201051 1662972 cli_runner.go:164] Run: docker container inspect old-k8s-version-610182 --format={{.State.Status}}
	I0914 01:12:23.203527 1662972 out.go:177] * Verifying Kubernetes components...
	I0914 01:12:23.199619 1662972 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-610182"
	I0914 01:12:23.203771 1662972 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-610182"
	W0914 01:12:23.203805 1662972 addons.go:243] addon metrics-server should already be in state true
	I0914 01:12:23.203983 1662972 host.go:66] Checking if "old-k8s-version-610182" exists ...
	I0914 01:12:23.204555 1662972 cli_runner.go:164] Run: docker container inspect old-k8s-version-610182 --format={{.State.Status}}
	I0914 01:12:23.205425 1662972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:12:23.225542 1662972 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:12:23.227302 1662972 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:12:23.227323 1662972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:12:23.227393 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:23.264155 1662972 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-610182"
	W0914 01:12:23.264177 1662972 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:12:23.264204 1662972 host.go:66] Checking if "old-k8s-version-610182" exists ...
	I0914 01:12:23.264719 1662972 cli_runner.go:164] Run: docker container inspect old-k8s-version-610182 --format={{.State.Status}}
	I0914 01:12:23.270022 1662972 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0914 01:12:23.271810 1662972 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0914 01:12:23.277111 1662972 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:12:23.277118 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0914 01:12:23.277193 1662972 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0914 01:12:23.277270 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:23.281355 1662972 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:12:23.281385 1662972 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:12:23.281452 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:23.320426 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:23.322121 1662972 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:12:23.322145 1662972 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:12:23.322207 1662972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-610182
	I0914 01:12:23.336098 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:23.348020 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:23.366670 1662972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34914 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/old-k8s-version-610182/id_rsa Username:docker}
	I0914 01:12:23.424992 1662972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:12:23.473712 1662972 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-610182" to be "Ready" ...
	I0914 01:12:23.544237 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:12:23.602412 1662972 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:12:23.602432 1662972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:12:23.624267 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0914 01:12:23.624331 1662972 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0914 01:12:23.684888 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:12:23.710910 1662972 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:12:23.710961 1662972 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:12:23.722880 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0914 01:12:23.722943 1662972 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0914 01:12:23.804119 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:23.804192 1662972 retry.go:31] will retry after 125.704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
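
The retry.go:31 lines here and below are apply-with-backoff: each failed `kubectl apply` is retried after a growing, jittered delay (~126ms, 143ms, 345ms, … up to a few seconds) while the restarted apiserver still refuses connections on 8443. Not part of the log: a generic sketch of that pattern; it is not minikube's actual retry package.

	// retryExpo retries fn with exponentially growing, jittered sleeps
	// until it succeeds or maxTime elapses - the pattern behind the
	// "will retry after ..." lines above.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryExpo(fn func() error, initial, maxTime time.Duration) error {
		deadline := time.Now().Add(maxTime)
		delay := initial
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: last error: %w", err)
			}
			// Jitter the delay so parallel appliers do not retry in lockstep.
			sleep := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryExpo(func() error {
			attempts++
			if attempts < 4 {
				return errors.New("connection refused")
			}
			return nil
		}, 125*time.Millisecond, 30*time.Second)
	}
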
	I0914 01:12:23.815021 1662972 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:12:23.815087 1662972 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 01:12:23.816402 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0914 01:12:23.816458 1662972 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0914 01:12:23.881299 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:12:23.930436 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:12:23.949784 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0914 01:12:23.949872 1662972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0914 01:12:24.054804 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0914 01:12:24.054884 1662972 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0914 01:12:24.064475 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.064552 1662972 retry.go:31] will retry after 143.015246ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:24.169831 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.169987 1662972 retry.go:31] will retry after 158.219539ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.175767 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0914 01:12:24.175868 1662972 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0914 01:12:24.208641 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 01:12:24.222846 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.222932 1662972 retry.go:31] will retry after 345.394035ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.259845 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0914 01:12:24.259931 1662972 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0914 01:12:24.320198 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0914 01:12:24.320224 1662972 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0914 01:12:24.328547 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0914 01:12:24.375168 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.375202 1662972 retry.go:31] will retry after 401.783597ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.420918 1662972 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 01:12:24.420945 1662972 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0914 01:12:24.482991 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 01:12:24.515656 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.515686 1662972 retry.go:31] will retry after 320.475948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.568988 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 01:12:24.632499 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.632534 1662972 retry.go:31] will retry after 257.903015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:24.730509 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.730556 1662972 retry.go:31] will retry after 645.985712ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.777814 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:12:24.837179 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:12:24.891603 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 01:12:24.909044 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:24.909077 1662972 retry.go:31] will retry after 603.122216ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:25.098033 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:25.098064 1662972 retry.go:31] will retry after 406.173842ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:25.098125 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:25.098154 1662972 retry.go:31] will retry after 321.338573ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:25.377444 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:12:25.420670 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 01:12:25.475341 1662972 node_ready.go:53] error getting node "old-k8s-version-610182": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-610182": dial tcp 192.168.76.2:8443: connect: connection refused
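
node_ready.go polls the node object through the apiserver, so while kube-apiserver is still coming back the poll surfaces as the "connection refused" error above and is simply repeated within the 6m0s budget. Not part of the log: a sketch of that readiness check using k8s.io/client-go.

	// nodeReady-style check: fetch the node and inspect its Ready condition,
	// the call behind the "waiting ... to be Ready" / "error getting node" lines.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "old-k8s-version-610182", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err) // e.g. "connect: connection refused" while the apiserver restarts
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready:", c.Status == corev1.ConditionTrue)
			}
		}
	}
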
	I0914 01:12:25.504675 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:12:25.512938 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 01:12:25.550703 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:25.550734 1662972 retry.go:31] will retry after 843.86401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:25.821074 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:25.821108 1662972 retry.go:31] will retry after 834.83683ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:25.953520 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:25.953554 1662972 retry.go:31] will retry after 638.107044ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:25.953594 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:25.953608 1662972 retry.go:31] will retry after 910.99469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:26.394756 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 01:12:26.502748 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:26.502776 1662972 retry.go:31] will retry after 1.335032862s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:26.592088 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:12:26.656480 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 01:12:26.690624 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:26.690655 1662972 retry.go:31] will retry after 672.476204ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:26.774250 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:26.774280 1662972 retry.go:31] will retry after 481.049702ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:26.865593 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 01:12:26.951566 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:26.951657 1662972 retry.go:31] will retry after 1.207239917s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:27.255629 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 01:12:27.343089 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:27.343130 1662972 retry.go:31] will retry after 751.139534ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:27.363411 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0914 01:12:27.469717 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:27.469761 1662972 retry.go:31] will retry after 2.311190579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:27.838708 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 01:12:27.941575 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:27.941611 1662972 retry.go:31] will retry after 2.216375881s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:27.975090 1662972 node_ready.go:53] error getting node "old-k8s-version-610182": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-610182": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 01:12:28.095336 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 01:12:28.159698 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 01:12:28.205310 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:28.205346 1662972 retry.go:31] will retry after 2.193865665s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 01:12:28.293495 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:28.293535 1662972 retry.go:31] will retry after 2.272584501s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:29.781255 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0914 01:12:29.873192 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:29.873225 1662972 retry.go:31] will retry after 3.001423747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:30.159131 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 01:12:30.267096 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:30.267126 1662972 retry.go:31] will retry after 3.59785241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:30.399475 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 01:12:30.475167 1662972 node_ready.go:53] error getting node "old-k8s-version-610182": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-610182": dial tcp 192.168.76.2:8443: connect: connection refused
	W0914 01:12:30.494135 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:30.494171 1662972 retry.go:31] will retry after 2.5263766s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:30.566523 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 01:12:30.665140 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:30.665169 1662972 retry.go:31] will retry after 4.124290969s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 01:12:32.875782 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:12:33.021692 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 01:12:33.866111 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:12:34.790345 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:12:42.975733 1662972 node_ready.go:53] error getting node "old-k8s-version-610182": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-610182": net/http: TLS handshake timeout
	I0914 01:12:43.228313 1662972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.352477008s)
	W0914 01:12:43.228360 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0914 01:12:43.228380 1662972 retry.go:31] will retry after 4.044477655s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0914 01:12:43.505293 1662972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.483546317s)
	W0914 01:12:43.505334 1662972 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0914 01:12:43.505353 1662972 retry.go:31] will retry after 5.221376239s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
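
The block above is minikube's addon applier backing off while the restarted apiserver comes up: each `kubectl apply` fails fast with "connection refused" and is re-queued by retry.go with a slightly longer delay (2.2s, 2.5s, 3.0s, 3.6s, 4.1s, 5.2s), and the failure mode shifts to "TLS handshake timeout" once the port is open but the server is not yet serving. Below is a minimal Go sketch of that kind of loop; `applyWithRetry` and the jittered delay growth are illustrative assumptions, not minikube's actual helpers.

	package main

	import (
		"context"
		"errors"
		"fmt"
		"log"
		"math/rand"
		"time"
	)

	// applyWithRetry re-runs fn with a growing, jittered delay until it
	// succeeds or ctx expires, matching the shape of the "will retry
	// after ..." lines above. All names here are illustrative.
	func applyWithRetry(ctx context.Context, fn func() error) error {
		delay := 2 * time.Second
		for {
			err := fn()
			if err == nil {
				return nil
			}
			log.Printf("apply failed, will retry after %v: %v", delay, err)
			select {
			case <-ctx.Done():
				return fmt.Errorf("giving up: %w", err)
			case <-time.After(delay):
			}
			// Grow the delay with a little jitter; the uneven intervals
			// in the log suggest randomized backoff of this kind.
			delay += time.Duration(rand.Int63n(int64(time.Second)))
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		attempts := 0
		_ = applyWithRetry(ctx, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("connection refused") // stand-in for a failing kubectl apply
			}
			return nil
		})
	}
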
	I0914 01:12:43.808991 1662972 node_ready.go:49] node "old-k8s-version-610182" has status "Ready":"True"
	I0914 01:12:43.809016 1662972 node_ready.go:38] duration metric: took 20.335262798s for node "old-k8s-version-610182" to be "Ready" ...
	I0914 01:12:43.809026 1662972 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:12:44.016019 1662972 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-kbzrq" in "kube-system" namespace to be "Ready" ...
	I0914 01:12:44.286115 1662972 pod_ready.go:93] pod "coredns-74ff55c5b-kbzrq" in "kube-system" namespace has status "Ready":"True"
	I0914 01:12:44.286186 1662972 pod_ready.go:82] duration metric: took 270.069954ms for pod "coredns-74ff55c5b-kbzrq" in "kube-system" namespace to be "Ready" ...
	I0914 01:12:44.286211 1662972 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:12:44.361931 1662972 pod_ready.go:93] pod "etcd-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"True"
	I0914 01:12:44.361991 1662972 pod_ready.go:82] duration metric: took 75.758864ms for pod "etcd-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:12:44.362021 1662972 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:12:44.418820 1662972 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"True"
	I0914 01:12:44.418886 1662972 pod_ready.go:82] duration metric: took 56.842496ms for pod "kube-apiserver-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:12:44.418924 1662972 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:12:45.499244 1662972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.633092296s)
	I0914 01:12:45.529352 1662972 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.73896849s)
	I0914 01:12:46.427604 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:12:47.273604 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:12:48.077448 1662972 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-610182"
	I0914 01:12:48.726921 1662972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 01:12:48.927618 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:12:49.235208 1662972 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-610182 addons enable metrics-server
	
	I0914 01:12:49.237551 1662972 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0914 01:12:49.239038 1662972 addons.go:510] duration metric: took 26.039578077s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I0914 01:12:50.927942 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:12:52.936295 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:12:55.425007 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:12:57.425595 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:12:59.430272 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:01.929819 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:04.060707 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:06.429042 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:08.940568 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:11.427640 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:13.428362 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:15.428752 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:17.925312 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:20.425815 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:22.425914 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:24.925904 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:26.926348 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:29.433798 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:31.925315 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:33.931263 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:36.425328 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:38.926642 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:40.926669 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:43.426186 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:45.432511 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:47.925616 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:49.926233 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:52.424987 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:54.425911 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:56.926025 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:59.426858 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:01.925317 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:03.927601 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:05.928872 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:08.426515 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:10.926009 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:12.934519 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:14.504954 1662972 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:14.504981 1662972 pod_ready.go:82] duration metric: took 1m30.086034773s for pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.504994 1662972 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vmn48" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.511409 1662972 pod_ready.go:93] pod "kube-proxy-vmn48" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:14.511438 1662972 pod_ready.go:82] duration metric: took 6.436329ms for pod "kube-proxy-vmn48" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.511450 1662972 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.523506 1662972 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:14.523532 1662972 pod_ready.go:82] duration metric: took 12.073745ms for pod "kube-scheduler-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.523546 1662972 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:16.531960 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:19.030743 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:21.034860 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:23.530917 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:26.031465 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:28.032205 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:30.036949 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:32.530684 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:35.032406 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:37.529934 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:40.035056 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:42.529748 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:44.530577 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:47.030176 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:49.030855 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:51.529721 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:53.530474 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:56.030663 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:58.031115 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:00.034900 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:02.530721 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:05.030123 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:07.030239 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:09.030830 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:11.530356 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:14.029924 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:16.031991 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:18.530052 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:20.530291 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:23.030657 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:25.130347 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:27.530650 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:30.065372 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:32.531328 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:35.030938 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:37.031230 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:39.031679 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:41.530777 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:43.530952 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:46.030225 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:48.030814 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:50.530115 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:52.530309 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:55.031142 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:57.529714 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:59.531999 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:02.031409 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:04.530798 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:06.530939 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:09.030593 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:11.031171 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:13.532673 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:15.568258 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:18.030448 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:20.033651 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:22.531036 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:25.030886 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:27.530036 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:29.530901 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:32.030132 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:34.033335 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:36.529970 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:39.030797 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:41.529716 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:43.529988 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:46.030692 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:48.031278 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:50.529959 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:53.030950 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:55.031388 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:57.530385 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:00.058274 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:02.530500 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:05.034995 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:07.530822 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:10.032304 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:12.529655 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:14.530193 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:16.530600 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:18.530680 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:21.030349 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:23.530076 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:25.531418 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:28.030860 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:30.033042 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:32.530703 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:35.031474 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:37.032071 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:39.529681 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:41.530452 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:43.530637 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:46.030780 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:48.531142 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:51.031064 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:53.031651 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:55.529303 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:57.530301 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:59.536331 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:02.030690 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:04.530187 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:06.618640 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:09.030888 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:11.030940 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:13.529684 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:14.530105 1662972 pod_ready.go:82] duration metric: took 4m0.006544472s for pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace to be "Ready" ...
	E0914 01:18:14.530129 1662972 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:18:14.530138 1662972 pod_ready.go:39] duration metric: took 5m30.721102029s for extra waiting for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
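
The pod_ready lines above are a poll loop: every couple of seconds the pod is re-read and its Ready condition checked, until it flips to True or the deadline expires (here 4m0s elapse for metrics-server inside the overall 6m budget). metrics-server never becomes Ready because its image reference points at the unreachable fake.domain registry, as the kubelet entries further down show. A client-go sketch of such a wait, as an illustration rather than minikube's actual pod_ready.go:

	// Package readiness: a minimal sketch of a pod-readiness poll.
	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls the pod every 2s until its Ready condition is
	// True or timeout elapses. Transient apiserver errors (like the
	// "connection refused" phase above) just keep the poll going.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

From a shell, `kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=4m` expresses roughly the same check.
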
	I0914 01:18:14.530152 1662972 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:18:14.530192 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:18:14.530257 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:18:14.570490 1662972 cri.go:89] found id: "a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501"
	I0914 01:18:14.570513 1662972 cri.go:89] found id: "c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb"
	I0914 01:18:14.570518 1662972 cri.go:89] found id: ""
	I0914 01:18:14.570526 1662972 logs.go:276] 2 containers: [a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501 c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb]
	I0914 01:18:14.570582 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.574374 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.577818 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0914 01:18:14.577885 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:18:14.620151 1662972 cri.go:89] found id: "f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d"
	I0914 01:18:14.620172 1662972 cri.go:89] found id: "470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e"
	I0914 01:18:14.620177 1662972 cri.go:89] found id: ""
	I0914 01:18:14.620184 1662972 logs.go:276] 2 containers: [f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d 470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e]
	I0914 01:18:14.620247 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.623767 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.627945 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0914 01:18:14.628016 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:18:14.671644 1662972 cri.go:89] found id: "031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe"
	I0914 01:18:14.671715 1662972 cri.go:89] found id: "1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1"
	I0914 01:18:14.671735 1662972 cri.go:89] found id: ""
	I0914 01:18:14.671760 1662972 logs.go:276] 2 containers: [031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe 1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1]
	I0914 01:18:14.671903 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.675496 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.678883 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:18:14.679001 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:18:14.718833 1662972 cri.go:89] found id: "4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d"
	I0914 01:18:14.718855 1662972 cri.go:89] found id: "9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51"
	I0914 01:18:14.718860 1662972 cri.go:89] found id: ""
	I0914 01:18:14.718868 1662972 logs.go:276] 2 containers: [4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d 9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51]
	I0914 01:18:14.718921 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.722682 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.726402 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:18:14.726482 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:18:14.780039 1662972 cri.go:89] found id: "269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43"
	I0914 01:18:14.780064 1662972 cri.go:89] found id: "d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158"
	I0914 01:18:14.780069 1662972 cri.go:89] found id: ""
	I0914 01:18:14.780076 1662972 logs.go:276] 2 containers: [269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43 d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158]
	I0914 01:18:14.780140 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.783631 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.788015 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:18:14.788084 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:18:14.833303 1662972 cri.go:89] found id: "61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49"
	I0914 01:18:14.833329 1662972 cri.go:89] found id: "9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833"
	I0914 01:18:14.833334 1662972 cri.go:89] found id: ""
	I0914 01:18:14.833341 1662972 logs.go:276] 2 containers: [61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49 9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833]
	I0914 01:18:14.833559 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.844709 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.848783 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0914 01:18:14.848885 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:18:14.898099 1662972 cri.go:89] found id: "0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f"
	I0914 01:18:14.898132 1662972 cri.go:89] found id: "90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc"
	I0914 01:18:14.898137 1662972 cri.go:89] found id: ""
	I0914 01:18:14.898145 1662972 logs.go:276] 2 containers: [0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f 90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc]
	I0914 01:18:14.898216 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.901985 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.905346 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:18:14.905480 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:18:14.955707 1662972 cri.go:89] found id: "0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b"
	I0914 01:18:14.955732 1662972 cri.go:89] found id: "a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c"
	I0914 01:18:14.955737 1662972 cri.go:89] found id: ""
	I0914 01:18:14.955745 1662972 logs.go:276] 2 containers: [0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c]
	I0914 01:18:14.955836 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.959640 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.963256 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:18:14.963421 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:18:15.011451 1662972 cri.go:89] found id: "0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74"
	I0914 01:18:15.011474 1662972 cri.go:89] found id: ""
	I0914 01:18:15.011482 1662972 logs.go:276] 1 container: [0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74]
	I0914 01:18:15.011551 1662972 ssh_runner.go:195] Run: which crictl
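
What follows is the diagnostics pass: for each control-plane component, minikube has just resolved container IDs with `sudo crictl ps -a --quiet --name=<component>`, and it now tails each container with `crictl logs --tail 400 <id>`. The same two-step collection, sketched with plain os/exec instead of minikube's ssh_runner (the function names and component list are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all CRI containers (running or exited) whose
	// name matches the given filter; crictl prints one ID per line.
	func containerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
			ids, err := containerIDs(component)
			if err != nil {
				fmt.Println("listing", component, "failed:", err)
				continue
			}
			for _, id := range ids {
				// Tail the last 400 lines, as the logs.go entries below do.
				logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s", component, id, logs)
			}
		}
	}
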
	I0914 01:18:15.017064 1662972 logs.go:123] Gathering logs for kindnet [0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f] ...
	I0914 01:18:15.017089 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f"
	I0914 01:18:15.100853 1662972 logs.go:123] Gathering logs for container status ...
	I0914 01:18:15.100896 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:18:15.167001 1662972 logs.go:123] Gathering logs for kubelet ...
	I0914 01:18:15.167031 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 01:18:15.221886 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.810065     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.222124 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.861338     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-bgbkw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-bgbkw" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.222341 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.861518     661 reflector.go:138] object-"kube-system"/"kindnet-token-8vvgq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-8vvgq" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.222553 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.861743     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.222773 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.862004     661 reflector.go:138] object-"kube-system"/"metrics-server-token-726vd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-726vd" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.223002 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.862931     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-92cq9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-92cq9" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.223216 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.863007     661 reflector.go:138] object-"default"/"default-token-7p2wq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-7p2wq" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.223430 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.902284     661 reflector.go:138] object-"kube-system"/"coredns-token-4q2zj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-4q2zj" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.234726 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:47 old-k8s-version-610182 kubelet[661]: E0914 01:12:47.958549     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.234923 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:48 old-k8s-version-610182 kubelet[661]: E0914 01:12:48.502192     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.237810 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:59 old-k8s-version-610182 kubelet[661]: E0914 01:12:59.140474     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.239760 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:10 old-k8s-version-610182 kubelet[661]: E0914 01:13:10.593934     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.240600 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:11 old-k8s-version-610182 kubelet[661]: E0914 01:13:11.608416     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.240930 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:12 old-k8s-version-610182 kubelet[661]: E0914 01:13:12.613648     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.241117 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:14 old-k8s-version-610182 kubelet[661]: E0914 01:13:14.125122     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.241617 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:17 old-k8s-version-610182 kubelet[661]: E0914 01:13:17.635836     661 pod_workers.go:191] Error syncing pod a28fbbc7-3a81-496e-89e0-9e6d1f672574 ("storage-provisioner_kube-system(a28fbbc7-3a81-496e-89e0-9e6d1f672574)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a28fbbc7-3a81-496e-89e0-9e6d1f672574)"
	W0914 01:18:15.244534 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:27 old-k8s-version-610182 kubelet[661]: E0914 01:13:27.139662     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.245003 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:27 old-k8s-version-610182 kubelet[661]: E0914 01:13:27.667558     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.245462 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:31 old-k8s-version-610182 kubelet[661]: E0914 01:13:31.709183     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.245648 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:38 old-k8s-version-610182 kubelet[661]: E0914 01:13:38.128130     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.245976 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:43 old-k8s-version-610182 kubelet[661]: E0914 01:13:43.125611     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.246166 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:49 old-k8s-version-610182 kubelet[661]: E0914 01:13:49.125262     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.246754 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:55 old-k8s-version-610182 kubelet[661]: E0914 01:13:55.743007     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.247081 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:01 old-k8s-version-610182 kubelet[661]: E0914 01:14:01.711789     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.247268 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:02 old-k8s-version-610182 kubelet[661]: E0914 01:14:02.125283     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.249746 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:13 old-k8s-version-610182 kubelet[661]: E0914 01:14:13.143560     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.250079 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:15 old-k8s-version-610182 kubelet[661]: E0914 01:14:15.125566     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.250266 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:24 old-k8s-version-610182 kubelet[661]: E0914 01:14:24.125321     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.250594 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:26 old-k8s-version-610182 kubelet[661]: E0914 01:14:26.124714     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.250778 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:36 old-k8s-version-610182 kubelet[661]: E0914 01:14:36.125255     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.251367 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:40 old-k8s-version-610182 kubelet[661]: E0914 01:14:40.892715     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.251699 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:41 old-k8s-version-610182 kubelet[661]: E0914 01:14:41.896733     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.251890 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:51 old-k8s-version-610182 kubelet[661]: E0914 01:14:51.125970     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.252226 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:53 old-k8s-version-610182 kubelet[661]: E0914 01:14:53.124967     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.252411 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:03 old-k8s-version-610182 kubelet[661]: E0914 01:15:03.126474     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.252737 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:04 old-k8s-version-610182 kubelet[661]: E0914 01:15:04.124793     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.253063 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:16 old-k8s-version-610182 kubelet[661]: E0914 01:15:16.125211     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.253249 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:18 old-k8s-version-610182 kubelet[661]: E0914 01:15:18.125327     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.253575 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:28 old-k8s-version-610182 kubelet[661]: E0914 01:15:28.124764     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.253759 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:33 old-k8s-version-610182 kubelet[661]: E0914 01:15:33.125218     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.254090 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:41 old-k8s-version-610182 kubelet[661]: E0914 01:15:41.125092     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.256560 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:48 old-k8s-version-610182 kubelet[661]: E0914 01:15:48.133681     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.256889 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:54 old-k8s-version-610182 kubelet[661]: E0914 01:15:54.124777     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.257074 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:02 old-k8s-version-610182 kubelet[661]: E0914 01:16:02.125622     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.257686 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:08 old-k8s-version-610182 kubelet[661]: E0914 01:16:08.301537     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.258070 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:11 old-k8s-version-610182 kubelet[661]: E0914 01:16:11.707000     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.258290 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:17 old-k8s-version-610182 kubelet[661]: E0914 01:16:17.125469     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.258643 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:24 old-k8s-version-610182 kubelet[661]: E0914 01:16:24.124908     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.258876 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:29 old-k8s-version-610182 kubelet[661]: E0914 01:16:29.125787     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.259248 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:38 old-k8s-version-610182 kubelet[661]: E0914 01:16:38.124719     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.259439 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:41 old-k8s-version-610182 kubelet[661]: E0914 01:16:41.125876     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.259784 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:52 old-k8s-version-610182 kubelet[661]: E0914 01:16:52.125629     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.260030 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:52 old-k8s-version-610182 kubelet[661]: E0914 01:16:52.127069     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.260398 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:03 old-k8s-version-610182 kubelet[661]: E0914 01:17:03.130808     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.260587 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:07 old-k8s-version-610182 kubelet[661]: E0914 01:17:07.125991     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.260915 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:16 old-k8s-version-610182 kubelet[661]: E0914 01:17:16.128735     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.261104 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:20 old-k8s-version-610182 kubelet[661]: E0914 01:17:20.125870     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.261431 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:27 old-k8s-version-610182 kubelet[661]: E0914 01:17:27.125513     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.261625 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:35 old-k8s-version-610182 kubelet[661]: E0914 01:17:35.127070     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.261951 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:42 old-k8s-version-610182 kubelet[661]: E0914 01:17:42.125510     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.262134 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:49 old-k8s-version-610182 kubelet[661]: E0914 01:17:49.125114     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.262460 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:54 old-k8s-version-610182 kubelet[661]: E0914 01:17:54.125059     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.262645 1662972 logs.go:138] Found kubelet problem: Sep 14 01:18:00 old-k8s-version-610182 kubelet[661]: E0914 01:18:00.129170     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.262970 1662972 logs.go:138] Found kubelet problem: Sep 14 01:18:08 old-k8s-version-610182 kubelet[661]: E0914 01:18:08.124748     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.263154 1662972 logs.go:138] Found kubelet problem: Sep 14 01:18:12 old-k8s-version-610182 kubelet[661]: E0914 01:18:12.125811     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0914 01:18:15.263164 1662972 logs.go:123] Gathering logs for kube-proxy [269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43] ...
	I0914 01:18:15.263179 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43"
	I0914 01:18:15.310492 1662972 logs.go:123] Gathering logs for kube-controller-manager [61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49] ...
	I0914 01:18:15.310529 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49"
	I0914 01:18:15.371658 1662972 logs.go:123] Gathering logs for kindnet [90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc] ...
	I0914 01:18:15.371694 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc"
	I0914 01:18:15.416411 1662972 logs.go:123] Gathering logs for storage-provisioner [0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b] ...
	I0914 01:18:15.416445 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b"
	I0914 01:18:15.456135 1662972 logs.go:123] Gathering logs for containerd ...
	I0914 01:18:15.456163 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0914 01:18:15.516571 1662972 logs.go:123] Gathering logs for dmesg ...
	I0914 01:18:15.516607 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:18:15.539356 1662972 logs.go:123] Gathering logs for coredns [1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1] ...
	I0914 01:18:15.539386 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1"
	I0914 01:18:15.588866 1662972 logs.go:123] Gathering logs for kube-scheduler [4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d] ...
	I0914 01:18:15.588897 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d"
	I0914 01:18:15.633070 1662972 logs.go:123] Gathering logs for kube-scheduler [9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51] ...
	I0914 01:18:15.633097 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51"
	I0914 01:18:15.684406 1662972 logs.go:123] Gathering logs for kube-controller-manager [9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833] ...
	I0914 01:18:15.684438 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833"
	I0914 01:18:15.760067 1662972 logs.go:123] Gathering logs for kube-apiserver [a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501] ...
	I0914 01:18:15.760104 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501"
	I0914 01:18:15.825673 1662972 logs.go:123] Gathering logs for coredns [031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe] ...
	I0914 01:18:15.825707 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe"
	I0914 01:18:15.872296 1662972 logs.go:123] Gathering logs for etcd [f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d] ...
	I0914 01:18:15.872324 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d"
	I0914 01:18:15.919166 1662972 logs.go:123] Gathering logs for etcd [470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e] ...
	I0914 01:18:15.919198 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e"
	I0914 01:18:15.975449 1662972 logs.go:123] Gathering logs for kube-proxy [d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158] ...
	I0914 01:18:15.975476 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158"
	I0914 01:18:16.022259 1662972 logs.go:123] Gathering logs for storage-provisioner [a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c] ...
	I0914 01:18:16.022295 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c"
	I0914 01:18:16.073521 1662972 logs.go:123] Gathering logs for kubernetes-dashboard [0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74] ...
	I0914 01:18:16.073552 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74"
	I0914 01:18:16.124517 1662972 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:18:16.124549 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:18:16.274978 1662972 logs.go:123] Gathering logs for kube-apiserver [c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb] ...
	I0914 01:18:16.275007 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb"
	I0914 01:18:16.336006 1662972 out.go:358] Setting ErrFile to fd 2...
	I0914 01:18:16.336039 1662972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 01:18:16.336095 1662972 out.go:270] X Problems detected in kubelet:
	W0914 01:18:16.336110 1662972 out.go:270]   Sep 14 01:17:49 old-k8s-version-610182 kubelet[661]: E0914 01:17:49.125114     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:16.336117 1662972 out.go:270]   Sep 14 01:17:54 old-k8s-version-610182 kubelet[661]: E0914 01:17:54.125059     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:16.336137 1662972 out.go:270]   Sep 14 01:18:00 old-k8s-version-610182 kubelet[661]: E0914 01:18:00.129170     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:16.336145 1662972 out.go:270]   Sep 14 01:18:08 old-k8s-version-610182 kubelet[661]: E0914 01:18:08.124748     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:16.336155 1662972 out.go:270]   Sep 14 01:18:12 old-k8s-version-610182 kubelet[661]: E0914 01:18:12.125811     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0914 01:18:16.336162 1662972 out.go:358] Setting ErrFile to fd 2...
	I0914 01:18:16.336168 1662972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:18:26.338001 1662972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:18:26.351313 1662972 api_server.go:72] duration metric: took 6m3.152157656s to wait for apiserver process to appear ...
	I0914 01:18:26.351335 1662972 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:18:26.354233 1662972 out.go:201] 
	W0914 01:18:26.356293 1662972 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0914 01:18:26.356316 1662972 out.go:270] * 
	W0914 01:18:26.357206 1662972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:18:26.359110 1662972 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-610182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-610182
helpers_test.go:235: (dbg) docker inspect old-k8s-version-610182:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7a06be5923be41e9dccf5ce1d188f970b6437955f94c59b665107c164437b82d",
	        "Created": "2024-09-14T01:09:23.600384236Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1663248,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T01:12:14.577941955Z",
	            "FinishedAt": "2024-09-14T01:12:13.356004863Z"
	        },
	        "Image": "sha256:fe3365929e6ce54b4c06f0bc3d1500dff08f535844ef4978f2c45cd67c542134",
	        "ResolvConfPath": "/var/lib/docker/containers/7a06be5923be41e9dccf5ce1d188f970b6437955f94c59b665107c164437b82d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7a06be5923be41e9dccf5ce1d188f970b6437955f94c59b665107c164437b82d/hostname",
	        "HostsPath": "/var/lib/docker/containers/7a06be5923be41e9dccf5ce1d188f970b6437955f94c59b665107c164437b82d/hosts",
	        "LogPath": "/var/lib/docker/containers/7a06be5923be41e9dccf5ce1d188f970b6437955f94c59b665107c164437b82d/7a06be5923be41e9dccf5ce1d188f970b6437955f94c59b665107c164437b82d-json.log",
	        "Name": "/old-k8s-version-610182",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-610182:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-610182",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/74fea72ba157708a55c84a343243b9d2bdebc437f5274da96198140f6dbbcce3-init/diff:/var/lib/docker/overlay2/6c8a90774455b3f13d96b15ce5fd57cf56a284df68ee1777efc5fdfa6d28e51f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/74fea72ba157708a55c84a343243b9d2bdebc437f5274da96198140f6dbbcce3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/74fea72ba157708a55c84a343243b9d2bdebc437f5274da96198140f6dbbcce3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/74fea72ba157708a55c84a343243b9d2bdebc437f5274da96198140f6dbbcce3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-610182",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-610182/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-610182",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-610182",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-610182",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6a577736de15c9eb6bcf55162f5d11d85ec5bc3d3cc05c64517d1c2e248e45b1",
	            "SandboxKey": "/var/run/docker/netns/6a577736de15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34914"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34915"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34918"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34916"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34917"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-610182": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ac1801138b0d1bf0bde9c077e39f0307e94cacb1b1c7b1c8d6421696d2af2663",
	                    "EndpointID": "a7468a195b37981602533928ff02c6d2c619e06c8ee803fd516c855520f2bc33",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-610182",
	                        "7a06be5923be"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610182 -n old-k8s-version-610182
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-610182 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-610182 logs -n 25: (2.392189534s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-361936 sudo crio                             | cilium-361936             | jenkins | v1.34.0 | 14 Sep 24 01:07 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-361936                                       | cilium-361936             | jenkins | v1.34.0 | 14 Sep 24 01:07 UTC | 14 Sep 24 01:07 UTC |
	| start   | -p force-systemd-env-094130                            | force-systemd-env-094130  | jenkins | v1.34.0 | 14 Sep 24 01:07 UTC | 14 Sep 24 01:08 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-627102                           | force-systemd-flag-627102 | jenkins | v1.34.0 | 14 Sep 24 01:07 UTC | 14 Sep 24 01:08 UTC |
	|         | --memory=2048 --force-systemd                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-627102                              | force-systemd-flag-627102 | jenkins | v1.34.0 | 14 Sep 24 01:08 UTC | 14 Sep 24 01:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-627102                           | force-systemd-flag-627102 | jenkins | v1.34.0 | 14 Sep 24 01:08 UTC | 14 Sep 24 01:08 UTC |
	| start   | -p cert-expiration-547976                              | cert-expiration-547976    | jenkins | v1.34.0 | 14 Sep 24 01:08 UTC | 14 Sep 24 01:09 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-094130                               | force-systemd-env-094130  | jenkins | v1.34.0 | 14 Sep 24 01:08 UTC | 14 Sep 24 01:08 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-094130                            | force-systemd-env-094130  | jenkins | v1.34.0 | 14 Sep 24 01:08 UTC | 14 Sep 24 01:08 UTC |
	| start   | -p cert-options-229583                                 | cert-options-229583       | jenkins | v1.34.0 | 14 Sep 24 01:08 UTC | 14 Sep 24 01:09 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-229583 ssh                                | cert-options-229583       | jenkins | v1.34.0 | 14 Sep 24 01:09 UTC | 14 Sep 24 01:09 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-229583 -- sudo                         | cert-options-229583       | jenkins | v1.34.0 | 14 Sep 24 01:09 UTC | 14 Sep 24 01:09 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-229583                                 | cert-options-229583       | jenkins | v1.34.0 | 14 Sep 24 01:09 UTC | 14 Sep 24 01:09 UTC |
	| start   | -p old-k8s-version-610182                              | old-k8s-version-610182    | jenkins | v1.34.0 | 14 Sep 24 01:09 UTC | 14 Sep 24 01:11 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-610182        | old-k8s-version-610182    | jenkins | v1.34.0 | 14 Sep 24 01:11 UTC | 14 Sep 24 01:12 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-610182                              | old-k8s-version-610182    | jenkins | v1.34.0 | 14 Sep 24 01:12 UTC | 14 Sep 24 01:12 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| start   | -p cert-expiration-547976                              | cert-expiration-547976    | jenkins | v1.34.0 | 14 Sep 24 01:12 UTC | 14 Sep 24 01:12 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-610182             | old-k8s-version-610182    | jenkins | v1.34.0 | 14 Sep 24 01:12 UTC | 14 Sep 24 01:12 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-610182                              | old-k8s-version-610182    | jenkins | v1.34.0 | 14 Sep 24 01:12 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-547976                              | cert-expiration-547976    | jenkins | v1.34.0 | 14 Sep 24 01:12 UTC | 14 Sep 24 01:12 UTC |
	| start   | -p no-preload-772888                                   | no-preload-772888         | jenkins | v1.34.0 | 14 Sep 24 01:12 UTC | 14 Sep 24 01:13 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-772888             | no-preload-772888         | jenkins | v1.34.0 | 14 Sep 24 01:13 UTC | 14 Sep 24 01:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-772888                                   | no-preload-772888         | jenkins | v1.34.0 | 14 Sep 24 01:13 UTC | 14 Sep 24 01:13 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-772888                  | no-preload-772888         | jenkins | v1.34.0 | 14 Sep 24 01:13 UTC | 14 Sep 24 01:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-772888                                   | no-preload-772888         | jenkins | v1.34.0 | 14 Sep 24 01:13 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 01:13:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 01:13:58.613358 1670693 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:13:58.613491 1670693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:13:58.613502 1670693 out.go:358] Setting ErrFile to fd 2...
	I0914 01:13:58.613508 1670693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:13:58.613890 1670693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 01:13:58.614368 1670693 out.go:352] Setting JSON to false
	I0914 01:13:58.615604 1670693 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":32186,"bootTime":1726244253,"procs":236,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 01:13:58.615708 1670693 start.go:139] virtualization:  
	I0914 01:13:58.618429 1670693 out.go:177] * [no-preload-772888] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 01:13:58.620049 1670693 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 01:13:58.620224 1670693 notify.go:220] Checking for updates...
	I0914 01:13:58.623616 1670693 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 01:13:58.625518 1670693 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 01:13:58.627348 1670693 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	I0914 01:13:58.629156 1670693 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 01:13:58.631056 1670693 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 01:13:58.633319 1670693 config.go:182] Loaded profile config "no-preload-772888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 01:13:58.633968 1670693 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 01:13:58.668725 1670693 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 01:13:58.668895 1670693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:13:58.728101 1670693 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-14 01:13:58.716478258 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:13:58.728214 1670693 docker.go:318] overlay module found
	I0914 01:13:58.731424 1670693 out.go:177] * Using the docker driver based on existing profile
	I0914 01:13:58.733094 1670693 start.go:297] selected driver: docker
	I0914 01:13:58.733116 1670693 start.go:901] validating driver "docker" against &{Name:no-preload-772888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-772888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:13:58.733231 1670693 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 01:13:58.733854 1670693 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:13:58.796386 1670693 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-14 01:13:58.786467189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:13:58.796845 1670693 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 01:13:58.796878 1670693 cni.go:84] Creating CNI manager for ""
	I0914 01:13:58.796920 1670693 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 01:13:58.796971 1670693 start.go:340] cluster config:
	{Name:no-preload-772888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-772888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:13:58.800440 1670693 out.go:177] * Starting "no-preload-772888" primary control-plane node in "no-preload-772888" cluster
	I0914 01:13:58.802403 1670693 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 01:13:58.804117 1670693 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0914 01:13:54.425911 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:56.926025 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:13:58.806107 1670693 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 01:13:58.806188 1670693 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 01:13:58.806254 1670693 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/config.json ...
	I0914 01:13:58.806551 1670693 cache.go:107] acquiring lock: {Name:mkd07bc7e344927e23868fdfaebaa9da6ee6c6ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.806635 1670693 cache.go:115] /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0914 01:13:58.806646 1670693 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 100.471µs
	I0914 01:13:58.806655 1670693 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0914 01:13:58.806666 1670693 cache.go:107] acquiring lock: {Name:mkd8160ffc9ad31ffa8b8b4d49b412a0a90ca7ca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.806717 1670693 cache.go:115] /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0914 01:13:58.806727 1670693 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 62.54µs
	I0914 01:13:58.806734 1670693 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0914 01:13:58.806751 1670693 cache.go:107] acquiring lock: {Name:mk962498fe7d1aaa9ac6c7cf014109028db6062d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.806784 1670693 cache.go:115] /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0914 01:13:58.806789 1670693 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 39.885µs
	I0914 01:13:58.807054 1670693 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0914 01:13:58.806892 1670693 cache.go:107] acquiring lock: {Name:mk2f8589b4c2e71e356b1358ba41486fc2bc88ef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.807128 1670693 cache.go:115] /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0914 01:13:58.807173 1670693 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 283.248µs
	I0914 01:13:58.807187 1670693 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0914 01:13:58.806916 1670693 cache.go:107] acquiring lock: {Name:mk43acdb68020907da9624c6600888eb0ac8fa76 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.807215 1670693 cache.go:115] /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0914 01:13:58.807245 1670693 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 329.984µs
	I0914 01:13:58.807257 1670693 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0914 01:13:58.806933 1670693 cache.go:107] acquiring lock: {Name:mk5c03fd5b9a7b68eb04da1bd781b0643bfa1a0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.807286 1670693 cache.go:115] /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0914 01:13:58.807315 1670693 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 382.686µs
	I0914 01:13:58.807325 1670693 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0914 01:13:58.806949 1670693 cache.go:107] acquiring lock: {Name:mke1db2947e779b7709b5f82d6eb8271dccebac1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.807002 1670693 cache.go:107] acquiring lock: {Name:mkdcd5a8ae3ca9579d56d6c2d2d864a47e644257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.807400 1670693 cache.go:115] /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0914 01:13:58.807411 1670693 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 444.963µs
	I0914 01:13:58.807422 1670693 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0914 01:13:58.807457 1670693 cache.go:115] /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0914 01:13:58.807483 1670693 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 533.561µs
	I0914 01:13:58.807513 1670693 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0914 01:13:58.807542 1670693 cache.go:87] Successfully saved all images to host disk.
	W0914 01:13:58.825612 1670693 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 is of wrong architecture
	I0914 01:13:58.825635 1670693 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 01:13:58.825728 1670693 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 01:13:58.825749 1670693 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 01:13:58.825754 1670693 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 01:13:58.825763 1670693 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 01:13:58.825773 1670693 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0914 01:13:58.999213 1670693 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0914 01:13:58.999251 1670693 cache.go:194] Successfully downloaded all kic artifacts
	I0914 01:13:58.999282 1670693 start.go:360] acquireMachinesLock for no-preload-772888: {Name:mk1630908af29a1bb882fbbba4a221fd8cd6bde6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 01:13:58.999355 1670693 start.go:364] duration metric: took 56.131µs to acquireMachinesLock for "no-preload-772888"
	I0914 01:13:58.999383 1670693 start.go:96] Skipping create...Using existing machine configuration
	I0914 01:13:58.999392 1670693 fix.go:54] fixHost starting: 
	I0914 01:13:58.999680 1670693 cli_runner.go:164] Run: docker container inspect no-preload-772888 --format={{.State.Status}}
	I0914 01:13:59.027886 1670693 fix.go:112] recreateIfNeeded on no-preload-772888: state=Stopped err=<nil>
	W0914 01:13:59.027956 1670693 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 01:13:59.031280 1670693 out.go:177] * Restarting existing docker container for "no-preload-772888" ...
	I0914 01:13:59.033360 1670693 cli_runner.go:164] Run: docker start no-preload-772888
	I0914 01:13:59.373480 1670693 cli_runner.go:164] Run: docker container inspect no-preload-772888 --format={{.State.Status}}
	I0914 01:13:59.395113 1670693 kic.go:430] container "no-preload-772888" state is running.
	I0914 01:13:59.395506 1670693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-772888
	I0914 01:13:59.418950 1670693 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/config.json ...
	I0914 01:13:59.419185 1670693 machine.go:93] provisionDockerMachine start ...
	I0914 01:13:59.419242 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:13:59.447829 1670693 main.go:141] libmachine: Using SSH client type: native
	I0914 01:13:59.448167 1670693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34924 <nil> <nil>}
	I0914 01:13:59.448178 1670693 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 01:13:59.448771 1670693 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50962->127.0.0.1:34924: read: connection reset by peer
	I0914 01:14:02.571662 1670693 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-772888
	
	I0914 01:14:02.571702 1670693 ubuntu.go:169] provisioning hostname "no-preload-772888"
	I0914 01:14:02.571783 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:02.590521 1670693 main.go:141] libmachine: Using SSH client type: native
	I0914 01:14:02.590775 1670693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34924 <nil> <nil>}
	I0914 01:14:02.590793 1670693 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-772888 && echo "no-preload-772888" | sudo tee /etc/hostname
	I0914 01:14:02.724985 1670693 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-772888
	
	I0914 01:14:02.725113 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:02.743497 1670693 main.go:141] libmachine: Using SSH client type: native
	I0914 01:14:02.743734 1670693 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 34924 <nil> <nil>}
	I0914 01:14:02.743756 1670693 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-772888' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-772888/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-772888' | sudo tee -a /etc/hosts; 
				fi
			fi
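	The snippet above is idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends one when none is present, so repeated restarts do not stack duplicate entries. A quick spot-check of the result (a sketch; the hostname comes from the profile above):
		grep '^127.0.1.1' /etc/hosts   # expect: 127.0.1.1 no-preload-772888
		hostname                       # expect: no-preload-772888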
	I0914 01:14:02.872038 1670693 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 01:14:02.872065 1670693 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-1454467/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-1454467/.minikube}
	I0914 01:14:02.872101 1670693 ubuntu.go:177] setting up certificates
	I0914 01:14:02.872111 1670693 provision.go:84] configureAuth start
	I0914 01:14:02.872173 1670693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-772888
	I0914 01:14:02.890154 1670693 provision.go:143] copyHostCerts
	I0914 01:14:02.890269 1670693 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-1454467/.minikube/key.pem, removing ...
	I0914 01:14:02.890341 1670693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-1454467/.minikube/key.pem
	I0914 01:14:02.890458 1670693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/key.pem (1679 bytes)
	I0914 01:14:02.890592 1670693 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.pem, removing ...
	I0914 01:14:02.890609 1670693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.pem
	I0914 01:14:02.890643 1670693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.pem (1078 bytes)
	I0914 01:14:02.890714 1670693 exec_runner.go:144] found /home/jenkins/minikube-integration/19640-1454467/.minikube/cert.pem, removing ...
	I0914 01:14:02.890725 1670693 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19640-1454467/.minikube/cert.pem
	I0914 01:14:02.890753 1670693 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-1454467/.minikube/cert.pem (1123 bytes)
	I0914 01:14:02.890820 1670693 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem org=jenkins.no-preload-772888 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-772888]
	I0914 01:14:03.170653 1670693 provision.go:177] copyRemoteCerts
	I0914 01:14:03.170735 1670693 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 01:14:03.170789 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:03.188704 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:03.277445 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0914 01:14:03.304661 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0914 01:14:03.329885 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
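	The three scp calls above install the CA plus the freshly generated server keypair under /etc/docker. A minimal way to spot-check the server certificate against the SANs requested during generation (a sketch using standard openssl flags; paths as in the log):
		# print subject, validity window and SANs of the provisioned server cert
		sudo openssl x509 -noout -subject -dates -ext subjectAltName \
			-in /etc/docker/server.pem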
	I0914 01:14:03.356047 1670693 provision.go:87] duration metric: took 483.911601ms to configureAuth
	I0914 01:14:03.356076 1670693 ubuntu.go:193] setting minikube options for container-runtime
	I0914 01:14:03.356281 1670693 config.go:182] Loaded profile config "no-preload-772888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 01:14:03.356293 1670693 machine.go:96] duration metric: took 3.93710099s to provisionDockerMachine
	I0914 01:14:03.356303 1670693 start.go:293] postStartSetup for "no-preload-772888" (driver="docker")
	I0914 01:14:03.356314 1670693 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 01:14:03.356365 1670693 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 01:14:03.356411 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:03.373751 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:03.465738 1670693 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 01:14:03.469030 1670693 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 01:14:03.469068 1670693 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 01:14:03.469104 1670693 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 01:14:03.469119 1670693 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 01:14:03.469130 1670693 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-1454467/.minikube/addons for local assets ...
	I0914 01:14:03.469202 1670693 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-1454467/.minikube/files for local assets ...
	I0914 01:14:03.469283 1670693 filesync.go:149] local asset: /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/ssl/certs/14598482.pem -> 14598482.pem in /etc/ssl/certs
	I0914 01:14:03.469390 1670693 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 01:14:03.477942 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/ssl/certs/14598482.pem --> /etc/ssl/certs/14598482.pem (1708 bytes)
	I0914 01:14:03.502816 1670693 start.go:296] duration metric: took 146.496915ms for postStartSetup
	I0914 01:14:03.502917 1670693 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 01:14:03.502965 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:03.519670 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:03.605153 1670693 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 01:14:03.609715 1670693 fix.go:56] duration metric: took 4.610315034s for fixHost
	I0914 01:14:03.609781 1670693 start.go:83] releasing machines lock for "no-preload-772888", held for 4.610410845s
	I0914 01:14:03.609870 1670693 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-772888
	I0914 01:13:59.426858 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:01.925317 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:03.927601 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:03.626866 1670693 ssh_runner.go:195] Run: cat /version.json
	I0914 01:14:03.626931 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:03.627210 1670693 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 01:14:03.627289 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:03.648553 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:03.657630 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:03.739280 1670693 ssh_runner.go:195] Run: systemctl --version
	I0914 01:14:03.890857 1670693 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 01:14:03.895395 1670693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 01:14:03.914053 1670693 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 01:14:03.914152 1670693 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 01:14:03.925966 1670693 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
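	The find/sed pair above first ensures the loopback CNI config carries a "name" field and a cniVersion of 1.0.0, then renames any bridge or podman configs to *.mk_disabled so only the CNI minikube installs later stays active. A minimal check (a sketch; exact file names vary by base image):
		ls -l /etc/cni/net.d/                                      # disabled configs end in .mk_disabled
		sudo grep -H '"cniVersion"' /etc/cni/net.d/*loopback.conf*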
	I0914 01:14:03.925995 1670693 start.go:495] detecting cgroup driver to use...
	I0914 01:14:03.926026 1670693 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 01:14:03.926075 1670693 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 01:14:03.941217 1670693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 01:14:03.953457 1670693 docker.go:217] disabling cri-docker service (if available) ...
	I0914 01:14:03.953541 1670693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 01:14:03.966965 1670693 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 01:14:03.979212 1670693 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 01:14:04.074950 1670693 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 01:14:04.173737 1670693 docker.go:233] disabling docker service ...
	I0914 01:14:04.173854 1670693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 01:14:04.187342 1670693 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 01:14:04.200823 1670693 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 01:14:04.302115 1670693 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 01:14:04.425976 1670693 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 01:14:04.439312 1670693 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 01:14:04.457567 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0914 01:14:04.469237 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 01:14:04.480294 1670693 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 01:14:04.480367 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 01:14:04.491417 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 01:14:04.501637 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 01:14:04.511731 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 01:14:04.522614 1670693 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 01:14:04.532545 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 01:14:04.542634 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 01:14:04.552384 1670693 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 01:14:04.562687 1670693 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 01:14:04.572181 1670693 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 01:14:04.580763 1670693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:14:04.675453 1670693 ssh_runner.go:195] Run: sudo systemctl restart containerd
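	The sed edits above rewrite /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced off to match the host's cgroupfs driver, legacy runtime names are mapped to io.containerd.runc.v2, and unprivileged ports are enabled; containerd is then restarted to pick the changes up. A quick spot-check afterwards (a sketch; the grep pattern is illustrative):
		sudo grep -nE 'sandbox_image|SystemdCgroup|runc\.v2|enable_unprivileged_ports|conf_dir' \
			/etc/containerd/config.toml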
	I0914 01:14:04.829562 1670693 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 01:14:04.829674 1670693 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 01:14:04.834430 1670693 start.go:563] Will wait 60s for crictl version
	I0914 01:14:04.834521 1670693 ssh_runner.go:195] Run: which crictl
	I0914 01:14:04.838365 1670693 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 01:14:04.884147 1670693 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0914 01:14:04.884220 1670693 ssh_runner.go:195] Run: containerd --version
	I0914 01:14:04.909237 1670693 ssh_runner.go:195] Run: containerd --version
	I0914 01:14:04.945991 1670693 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0914 01:14:04.947794 1670693 cli_runner.go:164] Run: docker network inspect no-preload-772888 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
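	The Go template in the inspect call above flattens the network's name, driver, subnet, gateway, MTU and attached container IPs into a single JSON object. A trimmed-down equivalent for debugging by hand (a sketch; the network is named after the profile):
		docker network inspect no-preload-772888 \
			--format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'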
	I0914 01:14:04.964711 1670693 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0914 01:14:04.968375 1670693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
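	The /etc/hosts rewrite above follows a grep-then-append pattern: strip any stale mapping for the name, append a fresh one, and sudo cp the temp file back into place (a plain > redirection would run without root). The same pattern written out standalone (a sketch; IP and name taken from the log):
		# drop any stale mapping for the name, then append the fresh one
		{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
		  printf '192.168.85.1\thost.minikube.internal\n'; } > /tmp/h.$$
		sudo cp /tmp/h.$$ /etc/hosts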
	I0914 01:14:04.979237 1670693 kubeadm.go:883] updating cluster {Name:no-preload-772888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-772888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 01:14:04.979359 1670693 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 01:14:04.979404 1670693 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 01:14:05.020165 1670693 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 01:14:05.020194 1670693 cache_images.go:84] Images are preloaded, skipping loading
	I0914 01:14:05.020203 1670693 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0914 01:14:05.020316 1670693 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-772888 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-772888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 01:14:05.020396 1670693 ssh_runner.go:195] Run: sudo crictl info
	I0914 01:14:05.072702 1670693 cni.go:84] Creating CNI manager for ""
	I0914 01:14:05.072730 1670693 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 01:14:05.072740 1670693 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 01:14:05.072762 1670693 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-772888 NodeName:no-preload-772888 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 01:14:05.072901 1670693 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-772888"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 01:14:05.072975 1670693 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 01:14:05.083272 1670693 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 01:14:05.083374 1670693 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 01:14:05.093816 1670693 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0914 01:14:05.114063 1670693 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 01:14:05.140234 1670693 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0914 01:14:05.161107 1670693 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0914 01:14:05.164971 1670693 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 01:14:05.176634 1670693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:14:05.272794 1670693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:14:05.290181 1670693 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888 for IP: 192.168.85.2
	I0914 01:14:05.290205 1670693 certs.go:194] generating shared ca certs ...
	I0914 01:14:05.290223 1670693 certs.go:226] acquiring lock for ca certs: {Name:mkfaf13a8785cc44d16a85b8163136271bcd698b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:14:05.290401 1670693 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.key
	I0914 01:14:05.290455 1670693 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.key
	I0914 01:14:05.290476 1670693 certs.go:256] generating profile certs ...
	I0914 01:14:05.290580 1670693 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.key
	I0914 01:14:05.290658 1670693 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/apiserver.key.3a7af6c0
	I0914 01:14:05.290716 1670693 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/proxy-client.key
	I0914 01:14:05.290854 1670693 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/1459848.pem (1338 bytes)
	W0914 01:14:05.290886 1670693 certs.go:480] ignoring /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/1459848_empty.pem, impossibly tiny 0 bytes
	I0914 01:14:05.290898 1670693 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca-key.pem (1675 bytes)
	I0914 01:14:05.290938 1670693 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/ca.pem (1078 bytes)
	I0914 01:14:05.290966 1670693 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/cert.pem (1123 bytes)
	I0914 01:14:05.290990 1670693 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/key.pem (1679 bytes)
	I0914 01:14:05.291048 1670693 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/ssl/certs/14598482.pem (1708 bytes)
	I0914 01:14:05.291812 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 01:14:05.322942 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0914 01:14:05.351517 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 01:14:05.394791 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0914 01:14:05.421843 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0914 01:14:05.467555 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 01:14:05.512636 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 01:14:05.558302 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 01:14:05.585867 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/ssl/certs/14598482.pem --> /usr/share/ca-certificates/14598482.pem (1708 bytes)
	I0914 01:14:05.618169 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 01:14:05.652224 1670693 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-1454467/.minikube/certs/1459848.pem --> /usr/share/ca-certificates/1459848.pem (1338 bytes)
	I0914 01:14:05.677827 1670693 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 01:14:05.697213 1670693 ssh_runner.go:195] Run: openssl version
	I0914 01:14:05.702968 1670693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/14598482.pem && ln -fs /usr/share/ca-certificates/14598482.pem /etc/ssl/certs/14598482.pem"
	I0914 01:14:05.713301 1670693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/14598482.pem
	I0914 01:14:05.716950 1670693 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 00:32 /usr/share/ca-certificates/14598482.pem
	I0914 01:14:05.717044 1670693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/14598482.pem
	I0914 01:14:05.724476 1670693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/14598482.pem /etc/ssl/certs/3ec20f2e.0"
	I0914 01:14:05.734131 1670693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 01:14:05.744775 1670693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:14:05.748261 1670693 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 00:22 /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:14:05.748384 1670693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 01:14:05.755788 1670693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 01:14:05.765275 1670693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1459848.pem && ln -fs /usr/share/ca-certificates/1459848.pem /etc/ssl/certs/1459848.pem"
	I0914 01:14:05.777098 1670693 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1459848.pem
	I0914 01:14:05.780928 1670693 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 00:32 /usr/share/ca-certificates/1459848.pem
	I0914 01:14:05.780994 1670693 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1459848.pem
	I0914 01:14:05.787945 1670693 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1459848.pem /etc/ssl/certs/51391683.0"
	I0914 01:14:05.798501 1670693 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 01:14:05.802360 1670693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 01:14:05.809547 1670693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 01:14:05.816793 1670693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 01:14:05.824082 1670693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 01:14:05.831378 1670693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 01:14:05.838400 1670693 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0914 01:14:05.845542 1670693 kubeadm.go:392] StartCluster: {Name:no-preload-772888 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-772888 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 01:14:05.845650 1670693 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 01:14:05.845713 1670693 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 01:14:05.891186 1670693 cri.go:89] found id: "ecb7e88b76ff3962e6ba3ab42b45615094166be64eb79b00b1032a39a2fd4c3d"
	I0914 01:14:05.891258 1670693 cri.go:89] found id: "93f00db6881b6556beeb6ff3b1b3e1c2c38fa795b02ff956f8c98fdc6aa7a439"
	I0914 01:14:05.891284 1670693 cri.go:89] found id: "594c5be6d024555a9c73577109a0f0c05605762c71ac075d4c573fe56dfac2d2"
	I0914 01:14:05.891308 1670693 cri.go:89] found id: "6b78941416e5b9610c4fb3536ad5769b2e0d693a7e0546deb89f91cf2b2cd419"
	I0914 01:14:05.891342 1670693 cri.go:89] found id: "f9902939b1e5c6245f07d2e7e54447313524fe65c2185d1212091cbb9cd5cb50"
	I0914 01:14:05.891366 1670693 cri.go:89] found id: "a8de7903a1d33219ef2f5dfc1b91e70f427872d12ce10d7b791f5e21db875f1f"
	I0914 01:14:05.891386 1670693 cri.go:89] found id: "3e116d604802bad9269a3fb52e4bd051e2b70297bf40922dd7d835ab0b777300"
	I0914 01:14:05.891420 1670693 cri.go:89] found id: "f0c9ef6d8832f2de18749478615eedb7cb3f37cf7b9e391af72a925e10f69825"
	I0914 01:14:05.891441 1670693 cri.go:89] found id: ""
	I0914 01:14:05.891529 1670693 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0914 01:14:05.905580 1670693 cri.go:116] JSON = null
	W0914 01:14:05.905628 1670693 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
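	The warning above comes from a cross-check between two views of the same containers: crictl listed 8 kube-system containers, but `runc --root /run/containerd/runc/k8s.io list -f json` returned null, so there is nothing paused to resume and the unpause step is skipped rather than treated as fatal. A minimal Go sketch of that reconciliation, using illustrative stand-in data (not minikube's actual cri package):

	    package main

	    import "fmt"

	    func main() {
	        // ids reported by `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`
	        criIDs := make([]string, 8)
	        // ids decoded from `runc list -f json`; the log shows JSON = null, i.e. none
	        var pausedIDs []string
	        if len(pausedIDs) != len(criIDs) {
	            // the mismatch is logged as a warning and startup continues
	            fmt.Printf("unpause failed: list paused: list returned %d containers, but ps returned %d\n",
	                len(pausedIDs), len(criIDs))
	        }
	    }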
	I0914 01:14:05.905699 1670693 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 01:14:05.916043 1670693 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 01:14:05.916066 1670693 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 01:14:05.916137 1670693 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 01:14:05.930870 1670693 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 01:14:05.931592 1670693 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-772888" does not appear in /home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 01:14:05.932025 1670693 kubeconfig.go:62] /home/jenkins/minikube-integration/19640-1454467/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-772888" cluster setting kubeconfig missing "no-preload-772888" context setting]
	I0914 01:14:05.932965 1670693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/kubeconfig: {Name:mk9726361d7deb93fbb6dba7857cc3f0a8a02233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:14:05.934804 1670693 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 01:14:05.953504 1670693 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I0914 01:14:05.953594 1670693 kubeadm.go:597] duration metric: took 37.519783ms to restartPrimaryControlPlane
	I0914 01:14:05.953624 1670693 kubeadm.go:394] duration metric: took 108.091439ms to StartCluster
	I0914 01:14:05.953679 1670693 settings.go:142] acquiring lock: {Name:mk71d0962f5f4196c9fea75fe9a601467858166a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:14:05.953809 1670693 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 01:14:05.955042 1670693 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/kubeconfig: {Name:mk9726361d7deb93fbb6dba7857cc3f0a8a02233 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 01:14:05.956073 1670693 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 01:14:05.956295 1670693 config.go:182] Loaded profile config "no-preload-772888": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 01:14:05.956362 1670693 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
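	The `toEnable` map above is the full addon universe for this profile; only the entries set to `true` (dashboard, default-storageclass, metrics-server, storage-provisioner) proceed to the "Setting addon" steps that follow. A small Go sketch of that selection, with illustrative names rather than minikube's addons package:

	    package main

	    import (
	        "fmt"
	        "sort"
	    )

	    func main() {
	        // abbreviated version of the toEnable map from the log line above
	        toEnable := map[string]bool{
	            "dashboard":            true,
	            "default-storageclass": true,
	            "metrics-server":       true,
	            "storage-provisioner":  true,
	            "volcano":              false,
	            "ingress":              false,
	        }
	        var enabled []string
	        for name, on := range toEnable {
	            if on {
	                enabled = append(enabled, name)
	            }
	        }
	        sort.Strings(enabled) // map iteration order is random; sort for a stable list
	        fmt.Println("enabled addons:", enabled)
	    }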
	I0914 01:14:05.956437 1670693 addons.go:69] Setting storage-provisioner=true in profile "no-preload-772888"
	I0914 01:14:05.956475 1670693 addons.go:234] Setting addon storage-provisioner=true in "no-preload-772888"
	W0914 01:14:05.956488 1670693 addons.go:243] addon storage-provisioner should already be in state true
	I0914 01:14:05.956512 1670693 host.go:66] Checking if "no-preload-772888" exists ...
	I0914 01:14:05.956569 1670693 addons.go:69] Setting default-storageclass=true in profile "no-preload-772888"
	I0914 01:14:05.956595 1670693 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-772888"
	I0914 01:14:05.956916 1670693 cli_runner.go:164] Run: docker container inspect no-preload-772888 --format={{.State.Status}}
	I0914 01:14:05.956970 1670693 cli_runner.go:164] Run: docker container inspect no-preload-772888 --format={{.State.Status}}
	I0914 01:14:05.957330 1670693 addons.go:69] Setting dashboard=true in profile "no-preload-772888"
	I0914 01:14:05.957380 1670693 addons.go:234] Setting addon dashboard=true in "no-preload-772888"
	W0914 01:14:05.957410 1670693 addons.go:243] addon dashboard should already be in state true
	I0914 01:14:05.957496 1670693 host.go:66] Checking if "no-preload-772888" exists ...
	I0914 01:14:05.958202 1670693 cli_runner.go:164] Run: docker container inspect no-preload-772888 --format={{.State.Status}}
	I0914 01:14:05.958743 1670693 addons.go:69] Setting metrics-server=true in profile "no-preload-772888"
	I0914 01:14:05.958771 1670693 addons.go:234] Setting addon metrics-server=true in "no-preload-772888"
	W0914 01:14:05.958792 1670693 addons.go:243] addon metrics-server should already be in state true
	I0914 01:14:05.958817 1670693 host.go:66] Checking if "no-preload-772888" exists ...
	I0914 01:14:05.959404 1670693 cli_runner.go:164] Run: docker container inspect no-preload-772888 --format={{.State.Status}}
	I0914 01:14:05.960550 1670693 out.go:177] * Verifying Kubernetes components...
	I0914 01:14:05.964573 1670693 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 01:14:06.017328 1670693 addons.go:234] Setting addon default-storageclass=true in "no-preload-772888"
	W0914 01:14:06.017359 1670693 addons.go:243] addon default-storageclass should already be in state true
	I0914 01:14:06.017387 1670693 host.go:66] Checking if "no-preload-772888" exists ...
	I0914 01:14:06.017826 1670693 cli_runner.go:164] Run: docker container inspect no-preload-772888 --format={{.State.Status}}
	I0914 01:14:06.041185 1670693 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 01:14:06.041357 1670693 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 01:14:06.043583 1670693 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:14:06.043609 1670693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 01:14:06.043688 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:06.044488 1670693 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 01:14:06.044506 1670693 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 01:14:06.044578 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:06.071505 1670693 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0914 01:14:06.073374 1670693 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0914 01:14:06.075325 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0914 01:14:06.075351 1670693 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0914 01:14:06.075429 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:06.081647 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:06.109622 1670693 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 01:14:06.109643 1670693 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 01:14:06.109704 1670693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-772888
	I0914 01:14:06.124138 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:06.144077 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:06.172108 1670693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34924 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/no-preload-772888/id_rsa Username:docker}
	I0914 01:14:06.207933 1670693 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 01:14:06.268129 1670693 node_ready.go:35] waiting up to 6m0s for node "no-preload-772888" to be "Ready" ...
	I0914 01:14:06.425784 1670693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:14:06.454302 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0914 01:14:06.454328 1670693 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0914 01:14:06.508913 1670693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 01:14:06.508944 1670693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 01:14:06.518595 1670693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:14:06.579327 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0914 01:14:06.579360 1670693 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0914 01:14:06.652421 1670693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 01:14:06.652466 1670693 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 01:14:06.653674 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0914 01:14:06.653697 1670693 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0914 01:14:06.836943 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0914 01:14:06.837022 1670693 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0914 01:14:06.867820 1670693 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:14:06.867904 1670693 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0914 01:14:07.001520 1670693 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0914 01:14:07.001627 1670693 retry.go:31] will retry after 211.43395ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
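	Both failed applies here share one root cause: the apiserver on localhost:8443 is not accepting connections yet, so kubectl cannot download the OpenAPI schema it needs for validation. Rather than aborting, each apply is retried after a short randomized delay (211ms and 125ms in this run), and the later `--force` reapplies succeed once the apiserver is up. A minimal Go sketch of that jittered-backoff retry shape, using illustrative names rather than minikube's retry package:

	    package main

	    import (
	        "fmt"
	        "math/rand"
	        "time"
	    )

	    // retry runs fn until it succeeds or attempts are exhausted,
	    // sleeping a jittered, growing delay between failures.
	    func retry(attempts int, base time.Duration, fn func() error) error {
	        var err error
	        for i := 0; i < attempts; i++ {
	            if err = fn(); err == nil {
	                return nil
	            }
	            // jittered exponential backoff: base * 2^i * [0.5, 1.5)
	            delay := time.Duration(float64(base) * float64(uint(1)<<uint(i)) * (0.5 + rand.Float64()))
	            fmt.Printf("will retry after %v: %v\n", delay, err)
	            time.Sleep(delay)
	        }
	        return err
	    }

	    func main() {
	        calls := 0
	        _ = retry(5, 100*time.Millisecond, func() error {
	            calls++
	            if calls < 3 {
	                return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
	            }
	            return nil
	        })
	    }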
	I0914 01:14:07.016604 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0914 01:14:07.016680 1670693 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0914 01:14:07.085833 1670693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 01:14:07.151893 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0914 01:14:07.151979 1670693 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0914 01:14:07.168962 1670693 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0914 01:14:07.169043 1670693 retry.go:31] will retry after 125.133561ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0914 01:14:07.214089 1670693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 01:14:07.295376 1670693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0914 01:14:07.314538 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0914 01:14:07.314611 1670693 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0914 01:14:07.482405 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0914 01:14:07.482481 1670693 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0914 01:14:07.588992 1670693 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 01:14:07.589065 1670693 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0914 01:14:07.710660 1670693 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 01:14:05.928872 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:08.426515 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:11.418106 1670693 node_ready.go:49] node "no-preload-772888" has status "Ready":"True"
	I0914 01:14:11.418128 1670693 node_ready.go:38] duration metric: took 5.149958531s for node "no-preload-772888" to be "Ready" ...
	I0914 01:14:11.418139 1670693 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 01:14:11.458375 1670693 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-26ftm" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.486212 1670693 pod_ready.go:93] pod "coredns-7c65d6cfc9-26ftm" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:11.486278 1670693 pod_ready.go:82] duration metric: took 27.830638ms for pod "coredns-7c65d6cfc9-26ftm" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.486312 1670693 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-772888" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.538114 1670693 pod_ready.go:93] pod "etcd-no-preload-772888" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:11.538189 1670693 pod_ready.go:82] duration metric: took 51.856224ms for pod "etcd-no-preload-772888" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.538219 1670693 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-772888" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.574565 1670693 pod_ready.go:93] pod "kube-apiserver-no-preload-772888" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:11.574641 1670693 pod_ready.go:82] duration metric: took 36.399933ms for pod "kube-apiserver-no-preload-772888" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.574667 1670693 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-772888" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.612893 1670693 pod_ready.go:93] pod "kube-controller-manager-no-preload-772888" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:11.612970 1670693 pod_ready.go:82] duration metric: took 38.280576ms for pod "kube-controller-manager-no-preload-772888" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.612999 1670693 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-m4pjm" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.644054 1670693 pod_ready.go:93] pod "kube-proxy-m4pjm" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:11.644076 1670693 pod_ready.go:82] duration metric: took 31.055515ms for pod "kube-proxy-m4pjm" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:11.644087 1670693 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-772888" in "kube-system" namespace to be "Ready" ...
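	From here on the log is dominated by pod_ready polling: every few seconds each watched pod is re-read and its Ready condition checked, emitting one `has status "Ready":"False"` line per miss until the condition flips or the 6m0s budget expires. A compact client-go sketch of that loop, offered as an illustration of the pattern rather than minikube's pod_ready.go:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/util/wait"
	        "k8s.io/client-go/kubernetes"
	    )

	    // waitPodReady polls until the pod's Ready condition is True or the timeout hits.
	    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	        return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
	            func(ctx context.Context) (bool, error) {
	                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	                if err != nil {
	                    return false, nil // transient API errors: keep polling
	                }
	                for _, c := range pod.Status.Conditions {
	                    if c.Type == corev1.PodReady {
	                        fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
	                        return c.Status == corev1.ConditionTrue, nil
	                    }
	                }
	                return false, nil
	            })
	    }

	    func main() {
	        // building a real clientset needs a kubeconfig; omitted in this sketch
	        _ = waitPodReady
	    }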
	I0914 01:14:10.926009 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:12.934519 1662972 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:13.651898 1670693 pod_ready.go:103] pod "kube-scheduler-no-preload-772888" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:14.593252 1670693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.507330371s)
	I0914 01:14:14.593287 1670693 addons.go:475] Verifying addon metrics-server=true in "no-preload-772888"
	I0914 01:14:14.742940 1670693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.528754003s)
	I0914 01:14:14.742997 1670693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.447550132s)
	I0914 01:14:14.832571 1670693 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.121821558s)
	I0914 01:14:14.836054 1670693 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-772888 addons enable metrics-server
	
	I0914 01:14:14.838510 1670693 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I0914 01:14:14.840716 1670693 addons.go:510] duration metric: took 8.884346037s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
	I0914 01:14:15.654645 1670693 pod_ready.go:103] pod "kube-scheduler-no-preload-772888" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:18.151332 1670693 pod_ready.go:103] pod "kube-scheduler-no-preload-772888" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:14.504954 1662972 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:14.504981 1662972 pod_ready.go:82] duration metric: took 1m30.086034773s for pod "kube-controller-manager-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.504994 1662972 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vmn48" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.511409 1662972 pod_ready.go:93] pod "kube-proxy-vmn48" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:14.511438 1662972 pod_ready.go:82] duration metric: took 6.436329ms for pod "kube-proxy-vmn48" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.511450 1662972 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.523506 1662972 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-610182" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:14.523532 1662972 pod_ready.go:82] duration metric: took 12.073745ms for pod "kube-scheduler-old-k8s-version-610182" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:14.523546 1662972 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:16.531960 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:20.652194 1670693 pod_ready.go:103] pod "kube-scheduler-no-preload-772888" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:23.150986 1670693 pod_ready.go:103] pod "kube-scheduler-no-preload-772888" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:19.030743 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:21.034860 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:23.530917 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:25.650386 1670693 pod_ready.go:103] pod "kube-scheduler-no-preload-772888" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:26.151228 1670693 pod_ready.go:93] pod "kube-scheduler-no-preload-772888" in "kube-system" namespace has status "Ready":"True"
	I0914 01:14:26.151255 1670693 pod_ready.go:82] duration metric: took 14.507159301s for pod "kube-scheduler-no-preload-772888" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:26.151268 1670693 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace to be "Ready" ...
	I0914 01:14:28.157882 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:26.031465 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:28.032205 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:30.158836 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:32.657982 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:30.036949 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:32.530684 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:34.659405 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:37.157770 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:35.032406 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:37.529934 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:39.657123 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:42.162739 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:40.035056 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:42.529748 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:44.658034 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:46.659513 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:44.530577 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:47.030176 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:49.156737 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:51.157337 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:53.158301 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:49.030855 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:51.529721 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:53.530474 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:55.158909 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:57.657289 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:56.030663 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:14:58.031115 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:00.170358 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:02.657789 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:00.034900 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:02.530721 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:05.157565 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:07.657163 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:05.030123 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:07.030239 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:09.657322 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:11.657362 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:09.030830 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:11.530356 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:13.657465 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:15.657979 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:18.160036 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:14.029924 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:16.031991 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:18.530052 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:20.657558 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:22.660560 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:20.530291 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:23.030657 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:25.158768 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:27.657235 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:25.130347 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:27.530650 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:29.657626 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:31.658434 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:30.065372 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:32.531328 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:34.157745 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:36.158375 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:35.030938 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:37.031230 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:38.657124 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:40.658005 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:43.157440 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:39.031679 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:41.530777 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:43.530952 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:45.176902 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:47.657440 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:46.030225 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:48.030814 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:49.657735 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:51.657887 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:50.530115 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:52.530309 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:53.658506 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:56.158368 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:55.031142 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:57.529714 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:58.657439 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:00.658547 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:03.157427 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:15:59.531999 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:02.031409 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:05.157887 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:07.157995 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:04.530798 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:06.530939 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:09.657417 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:11.657880 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:09.030593 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:11.031171 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:13.532673 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:13.658375 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:16.157537 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:15.568258 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:18.030448 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:18.657579 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:21.157347 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:23.157631 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:20.033651 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:22.531036 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:25.158353 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:27.657695 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:25.030886 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:27.530036 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:30.158523 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:32.659667 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:29.530901 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:32.030132 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:35.157368 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:37.656921 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:34.033335 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:36.529970 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:39.657383 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:41.658447 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:39.030797 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:41.529716 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:43.529988 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:44.157151 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:46.157968 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:46.030692 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:48.031278 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:48.657982 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:51.157611 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:53.157846 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:50.529959 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:53.030950 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:55.656759 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:57.657254 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:55.031388 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:16:57.530385 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:00.190838 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:02.658737 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:00.058274 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:02.530500 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:05.158286 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:07.658156 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:05.034995 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:07.530822 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:10.158557 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:12.657467 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:10.032304 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:12.529655 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:15.158617 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:17.662655 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:14.530193 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:16.530600 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:18.530680 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:20.158090 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:22.158606 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:21.030349 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:23.530076 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:24.657275 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:26.658049 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:25.531418 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:28.030860 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:29.157479 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:31.657732 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:30.033042 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:32.530703 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:34.157575 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:36.158164 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:35.031474 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:37.032071 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:38.657547 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:41.157728 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:39.529681 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:41.530452 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:43.530637 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:43.657524 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:45.657616 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:48.157470 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:46.030780 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:48.531142 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:50.159552 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:52.657768 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:51.031064 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:53.031651 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:55.159131 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:57.657545 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:55.529303 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:57.530301 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:00.176580 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:02.658580 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:17:59.536331 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:02.030690 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:05.157131 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:07.162452 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:04.530187 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:06.618640 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:09.657821 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:12.157862 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:09.030888 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:11.030940 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:13.529684 1662972 pod_ready.go:103] pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:14.657447 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:16.665793 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:14.530105 1662972 pod_ready.go:82] duration metric: took 4m0.006544472s for pod "metrics-server-9975d5f86-ncmqs" in "kube-system" namespace to be "Ready" ...
	E0914 01:18:14.530129 1662972 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 01:18:14.530138 1662972 pod_ready.go:39] duration metric: took 5m30.721102029s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
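The interleaved entries above come from two concurrent `minikube start` runs (PIDs 1662972 and 1670693), each polling its own metrics-server pod roughly every 2 to 2.5 seconds. For PID 1662972 the 4m0s per-pod budget expires at 01:18:14, so WaitExtra gives up with a context-deadline error and the run falls through to log collection. A minimal way to check the same condition by hand, assuming the profile/context name old-k8s-version-610182 taken from the kubelet entries further down:

    # Print the Ready condition minikube was polling ("False" throughout this log).
    # Context name is an assumption inferred from the node name in the kubelet lines.
    kubectl --context old-k8s-version-610182 -n kube-system \
      get pod metrics-server-9975d5f86-ncmqs \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'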
	I0914 01:18:14.530152 1662972 api_server.go:52] waiting for apiserver process to appear ...
	I0914 01:18:14.530192 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0914 01:18:14.530257 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 01:18:14.570490 1662972 cri.go:89] found id: "a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501"
	I0914 01:18:14.570513 1662972 cri.go:89] found id: "c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb"
	I0914 01:18:14.570518 1662972 cri.go:89] found id: ""
	I0914 01:18:14.570526 1662972 logs.go:276] 2 containers: [a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501 c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb]
	I0914 01:18:14.570582 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.574374 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.577818 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0914 01:18:14.577885 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 01:18:14.620151 1662972 cri.go:89] found id: "f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d"
	I0914 01:18:14.620172 1662972 cri.go:89] found id: "470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e"
	I0914 01:18:14.620177 1662972 cri.go:89] found id: ""
	I0914 01:18:14.620184 1662972 logs.go:276] 2 containers: [f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d 470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e]
	I0914 01:18:14.620247 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.623767 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.627945 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0914 01:18:14.628016 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 01:18:14.671644 1662972 cri.go:89] found id: "031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe"
	I0914 01:18:14.671715 1662972 cri.go:89] found id: "1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1"
	I0914 01:18:14.671735 1662972 cri.go:89] found id: ""
	I0914 01:18:14.671760 1662972 logs.go:276] 2 containers: [031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe 1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1]
	I0914 01:18:14.671903 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.675496 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.678883 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0914 01:18:14.679001 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 01:18:14.718833 1662972 cri.go:89] found id: "4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d"
	I0914 01:18:14.718855 1662972 cri.go:89] found id: "9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51"
	I0914 01:18:14.718860 1662972 cri.go:89] found id: ""
	I0914 01:18:14.718868 1662972 logs.go:276] 2 containers: [4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d 9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51]
	I0914 01:18:14.718921 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.722682 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.726402 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0914 01:18:14.726482 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 01:18:14.780039 1662972 cri.go:89] found id: "269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43"
	I0914 01:18:14.780064 1662972 cri.go:89] found id: "d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158"
	I0914 01:18:14.780069 1662972 cri.go:89] found id: ""
	I0914 01:18:14.780076 1662972 logs.go:276] 2 containers: [269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43 d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158]
	I0914 01:18:14.780140 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.783631 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.788015 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 01:18:14.788084 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 01:18:14.833303 1662972 cri.go:89] found id: "61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49"
	I0914 01:18:14.833329 1662972 cri.go:89] found id: "9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833"
	I0914 01:18:14.833334 1662972 cri.go:89] found id: ""
	I0914 01:18:14.833341 1662972 logs.go:276] 2 containers: [61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49 9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833]
	I0914 01:18:14.833559 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.844709 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.848783 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0914 01:18:14.848885 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 01:18:14.898099 1662972 cri.go:89] found id: "0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f"
	I0914 01:18:14.898132 1662972 cri.go:89] found id: "90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc"
	I0914 01:18:14.898137 1662972 cri.go:89] found id: ""
	I0914 01:18:14.898145 1662972 logs.go:276] 2 containers: [0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f 90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc]
	I0914 01:18:14.898216 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.901985 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.905346 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0914 01:18:14.905480 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 01:18:14.955707 1662972 cri.go:89] found id: "0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b"
	I0914 01:18:14.955732 1662972 cri.go:89] found id: "a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c"
	I0914 01:18:14.955737 1662972 cri.go:89] found id: ""
	I0914 01:18:14.955745 1662972 logs.go:276] 2 containers: [0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c]
	I0914 01:18:14.955836 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.959640 1662972 ssh_runner.go:195] Run: which crictl
	I0914 01:18:14.963256 1662972 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 01:18:14.963421 1662972 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 01:18:15.011451 1662972 cri.go:89] found id: "0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74"
	I0914 01:18:15.011474 1662972 cri.go:89] found id: ""
	I0914 01:18:15.011482 1662972 logs.go:276] 1 containers: [0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74]
	I0914 01:18:15.011551 1662972 ssh_runner.go:195] Run: which crictl
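Each component above is discovered the same way: list all containers, running and exited, whose name matches, keeping only the IDs. Finding two IDs per component is consistent with the restart performed by the SecondStart test, since the pre-restart containers are still present in containerd; only kubernetes-dashboard returns a single ID in this run. An equivalent manual invocation, with the profile name assumed as before:

    # Same discovery step minikube runs over SSH: all containers (including
    # exited) named kube-apiserver, IDs only.
    minikube -p old-k8s-version-610182 ssh -- \
      sudo crictl ps -a --quiet --name=kube-apiserver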
	I0914 01:18:15.017064 1662972 logs.go:123] Gathering logs for kindnet [0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f] ...
	I0914 01:18:15.017089 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f"
	I0914 01:18:15.100853 1662972 logs.go:123] Gathering logs for container status ...
	I0914 01:18:15.100896 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 01:18:15.167001 1662972 logs.go:123] Gathering logs for kubelet ...
	I0914 01:18:15.167031 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 01:18:15.221886 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.810065     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.222124 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.861338     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-bgbkw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-bgbkw" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.222341 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.861518     661 reflector.go:138] object-"kube-system"/"kindnet-token-8vvgq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-8vvgq" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.222553 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.861743     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.222773 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.862004     661 reflector.go:138] object-"kube-system"/"metrics-server-token-726vd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-726vd" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.223002 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.862931     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-92cq9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-92cq9" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.223216 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.863007     661 reflector.go:138] object-"default"/"default-token-7p2wq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-7p2wq" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.223430 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:43 old-k8s-version-610182 kubelet[661]: E0914 01:12:43.902284     661 reflector.go:138] object-"kube-system"/"coredns-token-4q2zj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-4q2zj" is forbidden: User "system:node:old-k8s-version-610182" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-610182' and this object
	W0914 01:18:15.234726 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:47 old-k8s-version-610182 kubelet[661]: E0914 01:12:47.958549     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.234923 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:48 old-k8s-version-610182 kubelet[661]: E0914 01:12:48.502192     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.237810 1662972 logs.go:138] Found kubelet problem: Sep 14 01:12:59 old-k8s-version-610182 kubelet[661]: E0914 01:12:59.140474     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.239760 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:10 old-k8s-version-610182 kubelet[661]: E0914 01:13:10.593934     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.240600 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:11 old-k8s-version-610182 kubelet[661]: E0914 01:13:11.608416     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.240930 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:12 old-k8s-version-610182 kubelet[661]: E0914 01:13:12.613648     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.241117 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:14 old-k8s-version-610182 kubelet[661]: E0914 01:13:14.125122     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.241617 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:17 old-k8s-version-610182 kubelet[661]: E0914 01:13:17.635836     661 pod_workers.go:191] Error syncing pod a28fbbc7-3a81-496e-89e0-9e6d1f672574 ("storage-provisioner_kube-system(a28fbbc7-3a81-496e-89e0-9e6d1f672574)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(a28fbbc7-3a81-496e-89e0-9e6d1f672574)"
	W0914 01:18:15.244534 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:27 old-k8s-version-610182 kubelet[661]: E0914 01:13:27.139662     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.245003 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:27 old-k8s-version-610182 kubelet[661]: E0914 01:13:27.667558     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.245462 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:31 old-k8s-version-610182 kubelet[661]: E0914 01:13:31.709183     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.245648 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:38 old-k8s-version-610182 kubelet[661]: E0914 01:13:38.128130     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.245976 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:43 old-k8s-version-610182 kubelet[661]: E0914 01:13:43.125611     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.246166 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:49 old-k8s-version-610182 kubelet[661]: E0914 01:13:49.125262     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.246754 1662972 logs.go:138] Found kubelet problem: Sep 14 01:13:55 old-k8s-version-610182 kubelet[661]: E0914 01:13:55.743007     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.247081 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:01 old-k8s-version-610182 kubelet[661]: E0914 01:14:01.711789     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.247268 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:02 old-k8s-version-610182 kubelet[661]: E0914 01:14:02.125283     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.249746 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:13 old-k8s-version-610182 kubelet[661]: E0914 01:14:13.143560     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.250079 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:15 old-k8s-version-610182 kubelet[661]: E0914 01:14:15.125566     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.250266 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:24 old-k8s-version-610182 kubelet[661]: E0914 01:14:24.125321     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.250594 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:26 old-k8s-version-610182 kubelet[661]: E0914 01:14:26.124714     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.250778 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:36 old-k8s-version-610182 kubelet[661]: E0914 01:14:36.125255     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.251367 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:40 old-k8s-version-610182 kubelet[661]: E0914 01:14:40.892715     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.251699 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:41 old-k8s-version-610182 kubelet[661]: E0914 01:14:41.896733     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.251890 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:51 old-k8s-version-610182 kubelet[661]: E0914 01:14:51.125970     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.252226 1662972 logs.go:138] Found kubelet problem: Sep 14 01:14:53 old-k8s-version-610182 kubelet[661]: E0914 01:14:53.124967     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.252411 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:03 old-k8s-version-610182 kubelet[661]: E0914 01:15:03.126474     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.252737 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:04 old-k8s-version-610182 kubelet[661]: E0914 01:15:04.124793     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.253063 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:16 old-k8s-version-610182 kubelet[661]: E0914 01:15:16.125211     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.253249 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:18 old-k8s-version-610182 kubelet[661]: E0914 01:15:18.125327     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.253575 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:28 old-k8s-version-610182 kubelet[661]: E0914 01:15:28.124764     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.253759 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:33 old-k8s-version-610182 kubelet[661]: E0914 01:15:33.125218     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.254090 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:41 old-k8s-version-610182 kubelet[661]: E0914 01:15:41.125092     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.256560 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:48 old-k8s-version-610182 kubelet[661]: E0914 01:15:48.133681     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 01:18:15.256889 1662972 logs.go:138] Found kubelet problem: Sep 14 01:15:54 old-k8s-version-610182 kubelet[661]: E0914 01:15:54.124777     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.257074 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:02 old-k8s-version-610182 kubelet[661]: E0914 01:16:02.125622     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.257686 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:08 old-k8s-version-610182 kubelet[661]: E0914 01:16:08.301537     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.258070 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:11 old-k8s-version-610182 kubelet[661]: E0914 01:16:11.707000     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.258290 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:17 old-k8s-version-610182 kubelet[661]: E0914 01:16:17.125469     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.258643 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:24 old-k8s-version-610182 kubelet[661]: E0914 01:16:24.124908     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.258876 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:29 old-k8s-version-610182 kubelet[661]: E0914 01:16:29.125787     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.259248 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:38 old-k8s-version-610182 kubelet[661]: E0914 01:16:38.124719     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.259439 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:41 old-k8s-version-610182 kubelet[661]: E0914 01:16:41.125876     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.259784 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:52 old-k8s-version-610182 kubelet[661]: E0914 01:16:52.125629     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.260030 1662972 logs.go:138] Found kubelet problem: Sep 14 01:16:52 old-k8s-version-610182 kubelet[661]: E0914 01:16:52.127069     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.260398 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:03 old-k8s-version-610182 kubelet[661]: E0914 01:17:03.130808     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.260587 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:07 old-k8s-version-610182 kubelet[661]: E0914 01:17:07.125991     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.260915 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:16 old-k8s-version-610182 kubelet[661]: E0914 01:17:16.128735     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.261104 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:20 old-k8s-version-610182 kubelet[661]: E0914 01:17:20.125870     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.261431 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:27 old-k8s-version-610182 kubelet[661]: E0914 01:17:27.125513     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.261625 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:35 old-k8s-version-610182 kubelet[661]: E0914 01:17:35.127070     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.261951 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:42 old-k8s-version-610182 kubelet[661]: E0914 01:17:42.125510     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.262134 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:49 old-k8s-version-610182 kubelet[661]: E0914 01:17:49.125114     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.262460 1662972 logs.go:138] Found kubelet problem: Sep 14 01:17:54 old-k8s-version-610182 kubelet[661]: E0914 01:17:54.125059     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.262645 1662972 logs.go:138] Found kubelet problem: Sep 14 01:18:00 old-k8s-version-610182 kubelet[661]: E0914 01:18:00.129170     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:15.262970 1662972 logs.go:138] Found kubelet problem: Sep 14 01:18:08 old-k8s-version-610182 kubelet[661]: E0914 01:18:08.124748     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:15.263154 1662972 logs.go:138] Found kubelet problem: Sep 14 01:18:12 old-k8s-version-610182 kubelet[661]: E0914 01:18:12.125811     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
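Every kubelet problem above reduces to two loops: metrics-server cannot pull fake.domain/registry.k8s.io/echoserver:1.4 (the hostname never resolves, so the pod alternates between ErrImagePull and ImagePullBackOff and never becomes Ready, which is exactly why the 4m0s wait above timed out), and dashboard-metrics-scraper sits in CrashLoopBackOff with its restart back-off growing from 10s to 2m40s. The fake.domain registry appears to be the test's own stand-in image rather than a real registry outage. The pull history can be confirmed from events, again assuming the context name:

    # List the ErrImagePull / BackOff events recorded for the pod named in the log.
    kubectl --context old-k8s-version-610182 -n kube-system get events \
      --field-selector involvedObject.name=metrics-server-9975d5f86-ncmqs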
	I0914 01:18:15.263164 1662972 logs.go:123] Gathering logs for kube-proxy [269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43] ...
	I0914 01:18:15.263179 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43"
	I0914 01:18:15.310492 1662972 logs.go:123] Gathering logs for kube-controller-manager [61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49] ...
	I0914 01:18:15.310529 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49"
	I0914 01:18:15.371658 1662972 logs.go:123] Gathering logs for kindnet [90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc] ...
	I0914 01:18:15.371694 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc"
	I0914 01:18:15.416411 1662972 logs.go:123] Gathering logs for storage-provisioner [0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b] ...
	I0914 01:18:15.416445 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b"
	I0914 01:18:15.456135 1662972 logs.go:123] Gathering logs for containerd ...
	I0914 01:18:15.456163 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0914 01:18:15.516571 1662972 logs.go:123] Gathering logs for dmesg ...
	I0914 01:18:15.516607 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 01:18:15.539356 1662972 logs.go:123] Gathering logs for coredns [1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1] ...
	I0914 01:18:15.539386 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1"
	I0914 01:18:15.588866 1662972 logs.go:123] Gathering logs for kube-scheduler [4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d] ...
	I0914 01:18:15.588897 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d"
	I0914 01:18:15.633070 1662972 logs.go:123] Gathering logs for kube-scheduler [9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51] ...
	I0914 01:18:15.633097 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51"
	I0914 01:18:15.684406 1662972 logs.go:123] Gathering logs for kube-controller-manager [9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833] ...
	I0914 01:18:15.684438 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833"
	I0914 01:18:15.760067 1662972 logs.go:123] Gathering logs for kube-apiserver [a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501] ...
	I0914 01:18:15.760104 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501"
	I0914 01:18:15.825673 1662972 logs.go:123] Gathering logs for coredns [031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe] ...
	I0914 01:18:15.825707 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe"
	I0914 01:18:15.872296 1662972 logs.go:123] Gathering logs for etcd [f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d] ...
	I0914 01:18:15.872324 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d"
	I0914 01:18:15.919166 1662972 logs.go:123] Gathering logs for etcd [470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e] ...
	I0914 01:18:15.919198 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e"
	I0914 01:18:15.975449 1662972 logs.go:123] Gathering logs for kube-proxy [d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158] ...
	I0914 01:18:15.975476 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158"
	I0914 01:18:16.022259 1662972 logs.go:123] Gathering logs for storage-provisioner [a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c] ...
	I0914 01:18:16.022295 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c"
	I0914 01:18:16.073521 1662972 logs.go:123] Gathering logs for kubernetes-dashboard [0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74] ...
	I0914 01:18:16.073552 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74"
	I0914 01:18:16.124517 1662972 logs.go:123] Gathering logs for describe nodes ...
	I0914 01:18:16.124549 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 01:18:16.274978 1662972 logs.go:123] Gathering logs for kube-apiserver [c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb] ...
	I0914 01:18:16.275007 1662972 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb"
	I0914 01:18:16.336006 1662972 out.go:358] Setting ErrFile to fd 2...
	I0914 01:18:16.336039 1662972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 01:18:16.336095 1662972 out.go:270] X Problems detected in kubelet:
	W0914 01:18:16.336110 1662972 out.go:270]   Sep 14 01:17:49 old-k8s-version-610182 kubelet[661]: E0914 01:17:49.125114     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:16.336117 1662972 out.go:270]   Sep 14 01:17:54 old-k8s-version-610182 kubelet[661]: E0914 01:17:54.125059     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:16.336137 1662972 out.go:270]   Sep 14 01:18:00 old-k8s-version-610182 kubelet[661]: E0914 01:18:00.129170     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 01:18:16.336145 1662972 out.go:270]   Sep 14 01:18:08 old-k8s-version-610182 kubelet[661]: E0914 01:18:08.124748     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	W0914 01:18:16.336155 1662972 out.go:270]   Sep 14 01:18:12 old-k8s-version-610182 kubelet[661]: E0914 01:18:12.125811     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0914 01:18:16.336162 1662972 out.go:358] Setting ErrFile to fd 2...
	I0914 01:18:16.336168 1662972 out.go:392] TERM=,COLORTERM=, which probably does not support color
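The two problems flagged above repeat for the whole window and involve the same pair of pods: metrics-server-9975d5f86-ncmqs stuck in ImagePullBackOff (its image points at the unresolvable fake.domain registry) and dashboard-metrics-scraper-8d5bb5db8-ppxd2 in CrashLoopBackOff with a 2m40s back-off. A minimal sketch to re-run the same kubelet scan by hand, assuming the test's profile name doubles as the minikube profile flag:

  $ minikube ssh -p old-k8s-version-610182 -- \
      sudo journalctl -u kubelet -n 400 --no-pager | grep 'pod_workers.go'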
	I0914 01:18:19.157095 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:21.158229 1670693 pod_ready.go:103] pod "metrics-server-6867b74b74-qn5ch" in "kube-system" namespace has status "Ready":"False"
	I0914 01:18:26.338001 1662972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 01:18:26.351313 1662972 api_server.go:72] duration metric: took 6m3.152157656s to wait for apiserver process to appear ...
	I0914 01:18:26.351335 1662972 api_server.go:88] waiting for apiserver healthz status ...
	I0914 01:18:26.354233 1662972 out.go:201] 
	W0914 01:18:26.356293 1662972 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0914 01:18:26.356316 1662972 out.go:270] * 
	W0914 01:18:26.357206 1662972 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 01:18:26.359110 1662972 out.go:201] 
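The exit is a timeout, not a crash: per the sections below, the apiserver and etcd containers stay Running and etcd keeps answering /health, yet minikube's healthz wait never succeeded inside its 6m budget. A sketch for probing the endpoint directly; the kubeconfig context name is assumed to match the profile, and the address 192.168.76.2:8443 is taken from the controller-manager request URLs later in this log:

  $ kubectl --context old-k8s-version-610182 get --raw='/healthz?verbose'
  $ minikube ssh -p old-k8s-version-610182 -- curl -sk https://192.168.76.2:8443/healthz   # assumes curl is in the node image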
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	2e3a9ad4703a7       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   986f346aedf97       dashboard-metrics-scraper-8d5bb5db8-ppxd2
	0ee7f8c793d3a       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   fa7eed352934b       storage-provisioner
	0fffc84db99e3       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   5ded10f7911e2       kubernetes-dashboard-cd95d586-j9r9r
	0baa6680fbdc7       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   6fda9fda696f9       kindnet-k4mks
	269d5b982d5d4       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   64c857d4048fb       kube-proxy-vmn48
	262c5e2889a1a       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   1673030e609db       busybox
	a87d77c89dfdf       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   fa7eed352934b       storage-provisioner
	031b10e8b5319       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   b28e16d01ca3e       coredns-74ff55c5b-kbzrq
	4c2d74f708806       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   7f4dd10254eb8       kube-scheduler-old-k8s-version-610182
	f1adefb580940       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   59f652edf220c       etcd-old-k8s-version-610182
	a8213321a49b6       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   b21dc7e091746       kube-apiserver-old-k8s-version-610182
	61e7b1ed81983       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   0a702bdd7f237       kube-controller-manager-old-k8s-version-610182
	51bda8710bdcb       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   9e12b9bf261cc       busybox
	1ccdca51423fa       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   588c920d95777       coredns-74ff55c5b-kbzrq
	90ac534cf8356       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   28afd50f1769c       kindnet-k4mks
	d4a517b2228e7       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   63a59a072a1b4       kube-proxy-vmn48
	c745816624dd3       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   edf50f8198c2b       kube-apiserver-old-k8s-version-610182
	9ca7286663fbb       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   3b41becead08c       kube-controller-manager-old-k8s-version-610182
	9ee463a5994bb       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   23b48cf39ecac       kube-scheduler-old-k8s-version-610182
	470db368691dc       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   f31ba19dd52b7       etcd-old-k8s-version-610182
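This table is crictl's view of the containerd runtime. The only abnormal rows are dashboard-metrics-scraper (Exited, attempt 5) and the superseded first attempts of the restarted control-plane pods; everything expected to run is Running. A sketch for pulling the exited scraper's output, reusing the full container ID that the containerd section below records for the truncated 2e3a9ad4703a7:

  $ minikube ssh -p old-k8s-version-610182 -- sudo crictl ps -a
  $ minikube ssh -p old-k8s-version-610182 -- \
      sudo crictl logs 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789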
	
	
	==> containerd <==
	Sep 14 01:14:13 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:13.138608793Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 14 01:14:13 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:13.138717535Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.132197305Z" level=info msg="CreateContainer within sandbox \"986f346aedf9768e369faa77929f33f874344e034966c7ef6e60cddf410ec4bc\" for container name:\"dashboard-metrics-scraper\"  attempt:4"
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.148655455Z" level=info msg="CreateContainer within sandbox \"986f346aedf9768e369faa77929f33f874344e034966c7ef6e60cddf410ec4bc\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"d068ed988e047ed74de1009129f5afe367d9d5ea0538e66289109c9391370871\""
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.149445877Z" level=info msg="StartContainer for \"d068ed988e047ed74de1009129f5afe367d9d5ea0538e66289109c9391370871\""
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.234002023Z" level=info msg="StartContainer for \"d068ed988e047ed74de1009129f5afe367d9d5ea0538e66289109c9391370871\" returns successfully"
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.259166912Z" level=info msg="shim disconnected" id=d068ed988e047ed74de1009129f5afe367d9d5ea0538e66289109c9391370871 namespace=k8s.io
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.259229296Z" level=warning msg="cleaning up after shim disconnected" id=d068ed988e047ed74de1009129f5afe367d9d5ea0538e66289109c9391370871 namespace=k8s.io
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.259240627Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.273691325Z" level=warning msg="cleanup warnings time=\"2024-09-14T01:14:40Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.894312698Z" level=info msg="RemoveContainer for \"20d17c5701b4ce629288e7bfc3bb5e4976fee1d1f8b652a1b30edd22a8fceb16\""
	Sep 14 01:14:40 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:14:40.905599245Z" level=info msg="RemoveContainer for \"20d17c5701b4ce629288e7bfc3bb5e4976fee1d1f8b652a1b30edd22a8fceb16\" returns successfully"
	Sep 14 01:15:48 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:15:48.125700602Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:15:48 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:15:48.131282493Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 14 01:15:48 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:15:48.133130856Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 14 01:15:48 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:15:48.133231722Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.126846778Z" level=info msg="CreateContainer within sandbox \"986f346aedf9768e369faa77929f33f874344e034966c7ef6e60cddf410ec4bc\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.141147428Z" level=info msg="CreateContainer within sandbox \"986f346aedf9768e369faa77929f33f874344e034966c7ef6e60cddf410ec4bc\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789\""
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.141639816Z" level=info msg="StartContainer for \"2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789\""
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.217640339Z" level=info msg="StartContainer for \"2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789\" returns successfully"
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.242185110Z" level=info msg="shim disconnected" id=2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789 namespace=k8s.io
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.242406336Z" level=warning msg="cleaning up after shim disconnected" id=2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789 namespace=k8s.io
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.242429671Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.303670822Z" level=info msg="RemoveContainer for \"d068ed988e047ed74de1009129f5afe367d9d5ea0538e66289109c9391370871\""
	Sep 14 01:16:08 old-k8s-version-610182 containerd[568]: time="2024-09-14T01:16:08.309959189Z" level=info msg="RemoveContainer for \"d068ed988e047ed74de1009129f5afe367d9d5ea0538e66289109c9391370871\" returns successfully"
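Both PullImage failures reduce to DNS: fake.domain never resolves through the node's resolver at 192.168.76.1:53, so containerd cannot reach any registry for that image and the metrics-server pod can never start. A sketch for reproducing the error from inside the node (getent is used because it is always present on the Ubuntu base image):

  $ minikube ssh -p old-k8s-version-610182
  $ getent hosts fake.domain; echo $?          # non-zero exit: the name does not resolve
  $ sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4   # expect the same "no such host" error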
	
	
	==> coredns [031b10e8b5319fea998363415b3b511d9dbdb6b5dcf822b26400d2b3681bb5fe] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] 127.0.0.1:42988 - 1991 "HINFO IN 1608329308612185535.2745716893906179397. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012591545s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0914 01:13:16.703153       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-14 01:12:46.702591228 +0000 UTC m=+0.045472216) (total time: 30.000460835s):
	Trace[2019727887]: [30.000460835s] [30.000460835s] END
	E0914 01:13:16.703259       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0914 01:13:16.703379       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-14 01:12:46.703051985 +0000 UTC m=+0.045932965) (total time: 30.000313177s):
	Trace[939984059]: [30.000313177s] [30.000313177s] END
	E0914 01:13:16.703424       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0914 01:13:16.704143       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-14 01:12:46.703253101 +0000 UTC m=+0.046134080) (total time: 30.000863428s):
	Trace[911902081]: [30.000863428s] [30.000863428s] END
	E0914 01:13:16.704158       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
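All three ListAndWatch timeouts date from the first 30s after this replica started (started at m=+0.045s) and target the service VIP 10.96.0.1:443, i.e. the window before kube-proxy had re-programmed service rules after the restart; the node events below show kube-proxy starting 5m41s ago, and no timeouts recur later in the excerpt. A sketch for checking the VIP once kube-proxy is up, with the context name assumed as before:

  $ kubectl --context old-k8s-version-610182 get svc kubernetes
  $ minikube ssh -p old-k8s-version-610182 -- curl -sk https://10.96.0.1:443/version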
	
	
	==> coredns [1ccdca51423fa99d2e530031643e1a3f8affe650ed219723b383e28a2eb94bc1] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57975 - 17205 "HINFO IN 5412341697194991559.725399177251395617. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.014299564s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-610182
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-610182
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=old-k8s-version-610182
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T01_10_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 01:10:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-610182
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 01:18:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 01:13:34 +0000   Sat, 14 Sep 2024 01:09:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 01:13:34 +0000   Sat, 14 Sep 2024 01:09:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 01:13:34 +0000   Sat, 14 Sep 2024 01:09:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 01:13:34 +0000   Sat, 14 Sep 2024 01:10:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-610182
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 6938b1708bc94f548b723532f4abbdae
	  System UUID:                13e48b43-7710-4997-82a4-37ac529ce333
	  Boot ID:                    31d76137-2e5d-4866-b75b-16f7e69e7ff6
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-74ff55c5b-kbzrq                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m10s
	  kube-system                 etcd-old-k8s-version-610182                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m16s
	  kube-system                 kindnet-k4mks                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m9s
	  kube-system                 kube-apiserver-old-k8s-version-610182             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-controller-manager-old-k8s-version-610182    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-proxy-vmn48                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 kube-scheduler-old-k8s-version-610182             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 metrics-server-9975d5f86-ncmqs                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m26s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-ppxd2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-j9r9r               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m35s (x5 over 8m35s)  kubelet     Node old-k8s-version-610182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m35s (x5 over 8m35s)  kubelet     Node old-k8s-version-610182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m35s (x4 over 8m35s)  kubelet     Node old-k8s-version-610182 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m17s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m16s                  kubelet     Node old-k8s-version-610182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m16s                  kubelet     Node old-k8s-version-610182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m16s                  kubelet     Node old-k8s-version-610182 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m16s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m9s                   kubelet     Node old-k8s-version-610182 status is now: NodeReady
	  Normal  Starting                 8m8s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m57s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet     Node old-k8s-version-610182 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet     Node old-k8s-version-610182 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet     Node old-k8s-version-610182 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m41s                  kube-proxy  Starting kube-proxy.
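The node itself is clean: Ready since 01:10:19, no pressure conditions, and the event log shows the expected two kubelet generations (the original start roughly 8m ago and the SecondStart restart at 5m57s). CPU requests total 950m of the 2000m allocatable, which is the 47% shown above, so capacity is not the limiting factor. To regenerate this view exactly as the log gatherer does, with the bundled kubectl:

  $ minikube ssh -p old-k8s-version-610182 -- \
      sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig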
	
	
	==> dmesg <==
	
	
	==> etcd [470db368691dc688bd73d31b5956b04b5f9dbefd4381609d2abb94380494773e] <==
	2024-09-14 01:09:53.899771 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/09/14 01:09:53 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/09/14 01:09:53 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/09/14 01:09:53 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/09/14 01:09:53 INFO: ea7e25599daad906 became leader at term 2
	raft2024/09/14 01:09:53 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-09-14 01:09:53.988755 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-14 01:09:53.991193 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-14 01:09:53.991407 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-14 01:09:53.991682 I | etcdserver: published {Name:old-k8s-version-610182 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-14 01:09:53.991774 I | embed: ready to serve client requests
	2024-09-14 01:09:53.991927 I | embed: ready to serve client requests
	2024-09-14 01:09:53.993431 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-14 01:09:53.993618 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-14 01:10:19.574602 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:10:25.854987 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:10:35.854955 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:10:45.854963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:10:55.854976 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:11:05.854938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:11:15.855072 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:11:25.854944 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:11:35.854949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:11:45.854838 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:11:55.854949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [f1adefb5809402611e02118e987a812f2bd5acdc1959872d264a2266f122241d] <==
	2024-09-14 01:14:24.790947 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:14:34.791245 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:14:44.792360 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:14:54.791137 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:15:04.790872 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:15:14.790895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:15:24.790974 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:15:34.790957 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:15:44.790962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:15:54.790914 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:16:04.791038 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:16:14.790979 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:16:24.790860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:16:34.790919 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:16:44.790906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:16:54.790871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:17:04.790896 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:17:14.790871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:17:24.790969 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:17:34.790991 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:17:44.790928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:17:54.790891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:18:04.791034 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:18:14.791020 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 01:18:24.791013 I | etcdserver/api/etcdhttp: /health OK (status code 200)
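The restarted etcd answers /health with 200 every ten seconds right through 01:18:24, seconds before the test gives up, so storage is not the bottleneck behind the healthz timeout. A sketch for querying it directly from inside the static pod; the certificate paths are minikube's usual layout and are an assumption here:

  $ kubectl --context old-k8s-version-610182 -n kube-system exec etcd-old-k8s-version-610182 -- \
      etcdctl --endpoints=https://127.0.0.1:2379 \
        --cacert=/var/lib/minikube/certs/etcd/ca.crt \
        --cert=/var/lib/minikube/certs/etcd/server.crt \
        --key=/var/lib/minikube/certs/etcd/server.key \
        endpoint health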
	
	
	==> kernel <==
	 01:18:28 up  9:00,  0 users,  load average: 0.74, 1.76, 2.36
	Linux old-k8s-version-610182 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [0baa6680fbdc78d3ce9ed5a376321b45078166580f6d09f833f0a479f6555f1f] <==
	I0914 01:16:27.702937       1 main.go:299] handling current node
	I0914 01:16:37.705370       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:16:37.705403       1 main.go:299] handling current node
	I0914 01:16:47.697159       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:16:47.697195       1 main.go:299] handling current node
	I0914 01:16:57.702669       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:16:57.702704       1 main.go:299] handling current node
	I0914 01:17:07.703453       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:17:07.703554       1 main.go:299] handling current node
	I0914 01:17:17.705549       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:17:17.705586       1 main.go:299] handling current node
	I0914 01:17:27.702847       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:17:27.702891       1 main.go:299] handling current node
	I0914 01:17:37.705411       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:17:37.705447       1 main.go:299] handling current node
	I0914 01:17:47.697205       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:17:47.697244       1 main.go:299] handling current node
	I0914 01:17:57.699952       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:17:57.699987       1 main.go:299] handling current node
	I0914 01:18:07.705338       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:18:07.705375       1 main.go:299] handling current node
	I0914 01:18:17.705378       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:18:17.705594       1 main.go:299] handling current node
	I0914 01:18:27.703086       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:18:27.703119       1 main.go:299] handling current node
	
	
	==> kindnet [90ac534cf83568c45dec31050411849fe6c5da6ad5850b89788bc760ebd183bc] <==
	I0914 01:10:22.224784       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 01:10:22.633339       1 controller.go:334] Starting controller kube-network-policies
	I0914 01:10:22.633367       1 controller.go:338] Waiting for informer caches to sync
	I0914 01:10:22.633374       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0914 01:10:22.833521       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0914 01:10:22.833552       1 metrics.go:61] Registering metrics
	I0914 01:10:22.833738       1 controller.go:374] Syncing nftables rules
	I0914 01:10:32.641895       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:10:32.641933       1 main.go:299] handling current node
	I0914 01:10:42.632983       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:10:42.633028       1 main.go:299] handling current node
	I0914 01:10:52.640585       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:10:52.640624       1 main.go:299] handling current node
	I0914 01:11:02.640582       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:11:02.640622       1 main.go:299] handling current node
	I0914 01:11:12.632742       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:11:12.632780       1 main.go:299] handling current node
	I0914 01:11:22.633475       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:11:22.633511       1 main.go:299] handling current node
	I0914 01:11:32.635966       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:11:32.635998       1 main.go:299] handling current node
	I0914 01:11:42.635959       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:11:42.636053       1 main.go:299] handling current node
	I0914 01:11:52.632801       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 01:11:52.632837       1 main.go:299] handling current node
	
	
	==> kube-apiserver [a8213321a49b65c4449a49fd8155be47c3e9743c8f95a001adb67b9fbfaa7501] <==
	I0914 01:15:25.036823       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:15:25.036833       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0914 01:15:47.188577       1 handler_proxy.go:102] no RequestInfo found in the context
	E0914 01:15:47.188649       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 01:15:47.188659       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:15:59.513723       1 client.go:360] parsed scheme: "passthrough"
	I0914 01:15:59.513767       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:15:59.513776       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 01:16:30.868291       1 client.go:360] parsed scheme: "passthrough"
	I0914 01:16:30.868335       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:16:30.868343       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 01:17:04.205992       1 client.go:360] parsed scheme: "passthrough"
	I0914 01:17:04.206135       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:17:04.206177       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0914 01:17:44.731363       1 handler_proxy.go:102] no RequestInfo found in the context
	E0914 01:17:44.731613       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 01:17:44.731632       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 01:17:48.922181       1 client.go:360] parsed scheme: "passthrough"
	I0914 01:17:48.922226       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:17:48.922235       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 01:18:28.817087       1 client.go:360] parsed scheme: "passthrough"
	I0914 01:18:28.817129       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:18:28.817138       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
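The repeating 503 is the aggregation layer: v1beta1.metrics.k8s.io is served by the metrics-server pod that never pulled its image, so OpenAPI aggregation for that group keeps getting requeued while the apiserver itself keeps serving everything else. A one-line check that the aggregated API is the unavailable piece:

  $ kubectl --context old-k8s-version-610182 get apiservice v1beta1.metrics.k8s.io
  # AVAILABLE should read False while metrics-server is down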
	
	
	==> kube-apiserver [c745816624dd373ada13c555722ba230b7c9e389e3b6d0e4f549f5f67748e6bb] <==
	I0914 01:10:01.085633       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0914 01:10:01.085814       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 01:10:01.148265       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0914 01:10:01.153512       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0914 01:10:01.153536       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0914 01:10:01.601172       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 01:10:01.653631       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0914 01:10:01.756010       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0914 01:10:01.757325       1 controller.go:606] quota admission added evaluator for: endpoints
	I0914 01:10:01.762202       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 01:10:02.934436       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0914 01:10:03.460669       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0914 01:10:03.508080       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0914 01:10:11.933795       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 01:10:18.912594       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0914 01:10:18.919102       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0914 01:10:37.785381       1 client.go:360] parsed scheme: "passthrough"
	I0914 01:10:37.785425       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:10:37.785433       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 01:11:09.967831       1 client.go:360] parsed scheme: "passthrough"
	I0914 01:11:09.968038       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:11:09.968096       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 01:11:44.622480       1 client.go:360] parsed scheme: "passthrough"
	I0914 01:11:44.622524       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 01:11:44.622533       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [61e7b1ed81983f8feb0c4985df8852de5a355a343c2f2bac727eddd38d326e49] <==
	I0914 01:14:08.554822       1 request.go:655] Throttling request took 1.048118428s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 01:14:09.406365       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 01:14:37.095952       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 01:14:41.056875       1 request.go:655] Throttling request took 1.043442998s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 01:14:41.908188       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 01:15:07.597732       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 01:15:13.558582       1 request.go:655] Throttling request took 1.048022682s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v1?timeout=32s
	W0914 01:15:14.409991       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 01:15:38.099532       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 01:15:46.060393       1 request.go:655] Throttling request took 1.039992059s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 01:15:46.911922       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 01:16:08.601764       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 01:16:18.567982       1 request.go:655] Throttling request took 1.053885981s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W0914 01:16:19.413693       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 01:16:39.103820       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 01:16:51.064127       1 request.go:655] Throttling request took 1.045740038s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 01:16:51.915770       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 01:17:09.605819       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 01:17:23.566396       1 request.go:655] Throttling request took 1.048453124s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
	W0914 01:17:24.417832       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 01:17:40.107740       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 01:17:56.068339       1 request.go:655] Throttling request took 1.047874307s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W0914 01:17:56.919751       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 01:18:10.609588       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 01:18:28.572910       1 request.go:655] Throttling request took 1.048174373s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
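Same failure seen from the controller-manager: resource-quota and garbage-collector discovery choke on metrics.k8s.io/v1beta1, and the periodic re-discovery is what trips the roughly one-second client-side throttling messages. They stop once metrics-server runs, or once the stale APIService is removed; the latter is only appropriate when metrics-server is intentionally absent (sketch):

  $ kubectl --context old-k8s-version-610182 delete apiservice v1beta1.metrics.k8s.io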
	
	
	==> kube-controller-manager [9ca7286663fbb643571bb09446c018e26421cf79f88726964da8abb585942833] <==
	I0914 01:10:18.935679       1 shared_informer.go:247] Caches are synced for endpoint 
	I0914 01:10:19.003293       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-kbzrq"
	I0914 01:10:19.005734       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vmn48"
	I0914 01:10:19.007760       1 shared_informer.go:247] Caches are synced for taint 
	I0914 01:10:19.014134       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0914 01:10:19.019748       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0914 01:10:19.023420       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-610182. Assuming now as a timestamp.
	I0914 01:10:19.024083       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0914 01:10:19.024728       1 event.go:291] "Event occurred" object="old-k8s-version-610182" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-610182 event: Registered Node old-k8s-version-610182 in Controller"
	I0914 01:10:19.031351       1 range_allocator.go:373] Set node old-k8s-version-610182 PodCIDR to [10.244.0.0/24]
	I0914 01:10:19.103348       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-twq5s"
	I0914 01:10:19.115161       1 shared_informer.go:247] Caches are synced for resource quota 
	I0914 01:10:19.159866       1 shared_informer.go:247] Caches are synced for resource quota 
	I0914 01:10:19.161894       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-k4mks"
	I0914 01:10:19.345993       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E0914 01:10:19.362661       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"e130847e-83ad-4680-b916-0585da583d45", ResourceVersion:"259", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63861873003, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001bd0220), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001bd0240)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4001bd0260), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x400162fe00), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001bd0
280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001bd02a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001bd02e0)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001664180), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000d9fec8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000431960), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001800c30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000d9ff18)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0914 01:10:19.447042       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"8751c078-761c-40ec-a9ae-9c120b7faef7", ResourceVersion:"399", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63861873004, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000e1d5e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000e1d600)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000e1d620), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000e1d640)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4000e1d660), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e1d680), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e1d6a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000e1d6c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000e1d740)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000e1d800)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40013586c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40015cfba8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x400008b1f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400076d2b8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40015cfbf0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I0914 01:10:19.546160       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0914 01:10:19.556984       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0914 01:10:19.557007       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0914 01:10:20.483902       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0914 01:10:20.607574       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-twq5s"
	I0914 01:10:24.024284       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0914 01:12:01.022316       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0914 01:12:01.194883       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [269d5b982d5d4a022dc3577be6403979f7298b0b3fc813ef12e6907953b41c43] <==
	I0914 01:12:47.384385       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0914 01:12:47.384715       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0914 01:12:47.464348       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0914 01:12:47.464438       1 server_others.go:185] Using iptables Proxier.
	I0914 01:12:47.464644       1 server.go:650] Version: v1.20.0
	I0914 01:12:47.465122       1 config.go:315] Starting service config controller
	I0914 01:12:47.465130       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0914 01:12:47.466136       1 config.go:224] Starting endpoint slice config controller
	I0914 01:12:47.466144       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0914 01:12:47.565267       1 shared_informer.go:247] Caches are synced for service config 
	I0914 01:12:47.566249       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [d4a517b2228e77499d889671df50a45c4521d51fc2cd9e574d45970b0a7b5158] <==
	I0914 01:10:19.970953       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0914 01:10:19.971042       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0914 01:10:20.032341       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0914 01:10:20.032512       1 server_others.go:185] Using iptables Proxier.
	I0914 01:10:20.034096       1 server.go:650] Version: v1.20.0
	I0914 01:10:20.037552       1 config.go:315] Starting service config controller
	I0914 01:10:20.037593       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0914 01:10:20.037631       1 config.go:224] Starting endpoint slice config controller
	I0914 01:10:20.037635       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0914 01:10:20.138407       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0914 01:10:20.138475       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [4c2d74f70880659a29267038a079647917ca2d99bfc511f1bcea43c7917c095d] <==
	I0914 01:12:36.602233       1 serving.go:331] Generated self-signed cert in-memory
	W0914 01:12:43.691403       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 01:12:43.696678       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 01:12:43.696714       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 01:12:43.696720       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 01:12:43.833913       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0914 01:12:43.841223       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 01:12:43.841249       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 01:12:43.841268       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0914 01:12:44.144140       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [9ee463a5994bbf1596e364b41c96e0062bea0e46e569fa942d3f748c34fcac51] <==
	I0914 01:09:55.770507       1 serving.go:331] Generated self-signed cert in-memory
	W0914 01:10:00.476745       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 01:10:00.476856       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 01:10:00.476889       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 01:10:00.476935       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 01:10:00.581610       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0914 01:10:00.581767       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 01:10:00.589109       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 01:10:00.581807       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0914 01:10:00.612384       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 01:10:00.612787       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 01:10:00.613068       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 01:10:00.613401       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 01:10:00.614147       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 01:10:00.614431       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 01:10:00.614663       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 01:10:00.614924       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 01:10:00.615196       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 01:10:00.615428       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 01:10:00.615628       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 01:10:00.616057       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0914 01:10:01.789391       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 14 01:16:41 old-k8s-version-610182 kubelet[661]: E0914 01:16:41.125876     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:16:52 old-k8s-version-610182 kubelet[661]: I0914 01:16:52.124381     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789
	Sep 14 01:16:52 old-k8s-version-610182 kubelet[661]: E0914 01:16:52.125629     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	Sep 14 01:16:52 old-k8s-version-610182 kubelet[661]: E0914 01:16:52.127069     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:17:03 old-k8s-version-610182 kubelet[661]: I0914 01:17:03.130410     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789
	Sep 14 01:17:03 old-k8s-version-610182 kubelet[661]: E0914 01:17:03.130808     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	Sep 14 01:17:07 old-k8s-version-610182 kubelet[661]: E0914 01:17:07.125991     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:17:16 old-k8s-version-610182 kubelet[661]: I0914 01:17:16.127058     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789
	Sep 14 01:17:16 old-k8s-version-610182 kubelet[661]: E0914 01:17:16.128735     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	Sep 14 01:17:20 old-k8s-version-610182 kubelet[661]: E0914 01:17:20.125870     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:17:27 old-k8s-version-610182 kubelet[661]: I0914 01:17:27.124584     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789
	Sep 14 01:17:27 old-k8s-version-610182 kubelet[661]: E0914 01:17:27.125513     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	Sep 14 01:17:35 old-k8s-version-610182 kubelet[661]: E0914 01:17:35.127070     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:17:42 old-k8s-version-610182 kubelet[661]: I0914 01:17:42.124486     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789
	Sep 14 01:17:42 old-k8s-version-610182 kubelet[661]: E0914 01:17:42.125510     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	Sep 14 01:17:49 old-k8s-version-610182 kubelet[661]: E0914 01:17:49.125114     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:17:54 old-k8s-version-610182 kubelet[661]: I0914 01:17:54.124707     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789
	Sep 14 01:17:54 old-k8s-version-610182 kubelet[661]: E0914 01:17:54.125059     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	Sep 14 01:18:00 old-k8s-version-610182 kubelet[661]: E0914 01:18:00.129170     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:18:08 old-k8s-version-610182 kubelet[661]: I0914 01:18:08.124406     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789
	Sep 14 01:18:08 old-k8s-version-610182 kubelet[661]: E0914 01:18:08.124748     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	Sep 14 01:18:12 old-k8s-version-610182 kubelet[661]: E0914 01:18:12.125811     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 01:18:22 old-k8s-version-610182 kubelet[661]: I0914 01:18:22.124449     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2e3a9ad4703a7b222bc2672a1977f45bdcea7c94d7fc49a6c9e2d27b05319789
	Sep 14 01:18:22 old-k8s-version-610182 kubelet[661]: E0914 01:18:22.124795     661 pod_workers.go:191] Error syncing pod 5bf87d45-b0bc-4d9e-8d76-3f43adad0670 ("dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-ppxd2_kubernetes-dashboard(5bf87d45-b0bc-4d9e-8d76-3f43adad0670)"
	Sep 14 01:18:24 old-k8s-version-610182 kubelet[661]: E0914 01:18:24.125187     661 pod_workers.go:191] Error syncing pod 5742bd3e-091b-4aa0-a58d-16fc3f044531 ("metrics-server-9975d5f86-ncmqs_kube-system(5742bd3e-091b-4aa0-a58d-16fc3f044531)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [0fffc84db99e3cdc18ff01574902ffd49a1b6d96ad7fc3649f3b141734861d74] <==
	2024/09/14 01:13:13 Using namespace: kubernetes-dashboard
	2024/09/14 01:13:13 Using in-cluster config to connect to apiserver
	2024/09/14 01:13:13 Using secret token for csrf signing
	2024/09/14 01:13:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/14 01:13:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/14 01:13:13 Successful initial request to the apiserver, version: v1.20.0
	2024/09/14 01:13:13 Generating JWE encryption key
	2024/09/14 01:13:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/14 01:13:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/14 01:13:13 Initializing JWE encryption key from synchronized object
	2024/09/14 01:13:13 Creating in-cluster Sidecar client
	2024/09/14 01:13:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:13:13 Serving insecurely on HTTP port: 9090
	2024/09/14 01:13:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:14:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:14:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:15:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:15:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:16:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:16:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:17:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:17:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:18:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 01:13:13 Starting overwatch
	
	
	==> storage-provisioner [0ee7f8c793d3af17f4104c267467b65d90ef7bc7c809c0c787cfa6261c9a806b] <==
	I0914 01:13:31.260086       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 01:13:31.305566       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 01:13:31.305650       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 01:13:48.785055       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 01:13:48.785297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8668b309-a446-415e-8a59-9236a39e2d77", APIVersion:"v1", ResourceVersion:"844", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-610182_6b142a15-ebe7-4aab-8cca-f5ee141d0c1e became leader
	I0914 01:13:48.785641       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-610182_6b142a15-ebe7-4aab-8cca-f5ee141d0c1e!
	I0914 01:13:48.886103       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-610182_6b142a15-ebe7-4aab-8cca-f5ee141d0c1e!
	
	
	==> storage-provisioner [a87d77c89dfdfee2cffaf479b6efc6afa9d07d45268f21126c7d19ec57c7bf8c] <==
	I0914 01:12:46.856321       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 01:13:16.858268       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
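Editor's note: the repeated "Operation cannot be fulfilled on daemonsets.apps ...: the object has been modified" errors in the kube-controller-manager section of the dump above are optimistic-concurrency conflicts, not corruption: a status write was submitted against a stale resourceVersion and the apiserver rejected it. Controllers normally re-queue and retry with a fresh read, which is why both DaemonSets still converge. A minimal sketch of that retry pattern with client-go follows (the package and helper names are hypothetical; it assumes an already-initialized kubernetes.Interface cs):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// bumpObservedGeneration retries a DaemonSet status update on
	// resourceVersion conflicts, the same condition daemon_controller.go
	// logs above.
	func bumpObservedGeneration(ctx context.Context, cs kubernetes.Interface) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			// Re-read the latest object on every attempt so the update
			// carries a fresh resourceVersion.
			ds, err := cs.AppsV1().DaemonSets("kube-system").Get(ctx, "kube-proxy", metav1.GetOptions{})
			if err != nil {
				return err
			}
			ds.Status.ObservedGeneration = ds.Generation
			_, err = cs.AppsV1().DaemonSets("kube-system").UpdateStatus(ctx, ds, metav1.UpdateOptions{})
			return err // a Conflict here triggers another attempt
		})
	}

retry.RetryOnConflict re-runs the closure only when the returned error is a Conflict, so other failures surface immediately; the re-Get inside the closure is what picks up the newer resourceVersion.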
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610182 -n old-k8s-version-610182
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-610182 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-ncmqs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-610182 describe pod metrics-server-9975d5f86-ncmqs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-610182 describe pod metrics-server-9975d5f86-ncmqs: exit status 1 (102.730894ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-ncmqs" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-610182 describe pod metrics-server-9975d5f86-ncmqs: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (376.30s)
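Editor's note on the post-mortem above: helpers_test.go first lists non-running pods with a field selector (line 261) and then describes each one (line 277); the NotFound from describe suggests the metrics-server pod disappeared in the window between the two calls, so the non-zero exit here is likely a race rather than an additional failure. A minimal client-go sketch of the same non-running-pods query, assuming an initialized kubernetes.Interface cs (the package and helper names are hypothetical):

	package sketch

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listNonRunningPods mirrors
	// "kubectl get po -A --field-selector=status.phase!=Running".
	func listNonRunningPods(ctx context.Context, cs kubernetes.Interface) error {
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			// Each item here may already be gone by the time a follow-up
			// "describe" runs, which is consistent with the NotFound above.
			fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
		return nil
	}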


Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.62
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.12
9 TestDownloadOnly/v1.20.0/DeleteAll 0.25
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.54
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 216.67
31 TestAddons/serial/GCPAuth/Namespaces 0.2
33 TestAddons/parallel/Registry 16.42
34 TestAddons/parallel/Ingress 20.4
35 TestAddons/parallel/InspektorGadget 10.98
36 TestAddons/parallel/MetricsServer 6.8
39 TestAddons/parallel/CSI 53.83
40 TestAddons/parallel/Headlamp 15.92
41 TestAddons/parallel/CloudSpanner 5.82
42 TestAddons/parallel/LocalPath 10.5
43 TestAddons/parallel/NvidiaDevicePlugin 6.63
44 TestAddons/parallel/Yakd 10.86
45 TestAddons/StoppedEnableDisable 12.31
46 TestCertOptions 39.06
47 TestCertExpiration 226.8
49 TestForceSystemdFlag 43.6
50 TestForceSystemdEnv 51.45
51 TestDockerEnvContainerd 46.37
56 TestErrorSpam/setup 30.88
57 TestErrorSpam/start 0.76
58 TestErrorSpam/status 1.11
59 TestErrorSpam/pause 1.87
60 TestErrorSpam/unpause 1.72
61 TestErrorSpam/stop 1.39
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 77.81
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.18
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.19
73 TestFunctional/serial/CacheCmd/cache/add_local 1.26
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.15
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 42.68
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.71
84 TestFunctional/serial/LogsFileCmd 1.75
85 TestFunctional/serial/InvalidService 4.88
87 TestFunctional/parallel/ConfigCmd 0.43
88 TestFunctional/parallel/DashboardCmd 8.37
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.28
91 TestFunctional/parallel/StatusCmd 1.02
95 TestFunctional/parallel/ServiceCmdConnect 9.68
96 TestFunctional/parallel/AddonsCmd 0.4
97 TestFunctional/parallel/PersistentVolumeClaim 23.14
99 TestFunctional/parallel/SSHCmd 0.68
100 TestFunctional/parallel/CpCmd 2.26
102 TestFunctional/parallel/FileSync 0.37
103 TestFunctional/parallel/CertSync 2.19
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
111 TestFunctional/parallel/License 0.31
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.4
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
125 TestFunctional/parallel/ProfileCmd/profile_list 0.37
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
127 TestFunctional/parallel/MountCmd/any-port 8.24
128 TestFunctional/parallel/ServiceCmd/List 0.57
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
131 TestFunctional/parallel/ServiceCmd/Format 0.43
132 TestFunctional/parallel/ServiceCmd/URL 0.39
133 TestFunctional/parallel/MountCmd/specific-port 2.06
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.27
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.32
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.86
142 TestFunctional/parallel/ImageCommands/Setup 0.74
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.21
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.56
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.39
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.52
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.46
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 112.96
160 TestMultiControlPlane/serial/DeployApp 34.18
161 TestMultiControlPlane/serial/PingHostFromPods 1.59
162 TestMultiControlPlane/serial/AddWorkerNode 22.63
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.77
165 TestMultiControlPlane/serial/CopyFile 19.06
166 TestMultiControlPlane/serial/StopSecondaryNode 12.82
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.02
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.75
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 134.08
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.52
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
173 TestMultiControlPlane/serial/StopCluster 36
174 TestMultiControlPlane/serial/RestartCluster 66.78
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
176 TestMultiControlPlane/serial/AddSecondaryNode 43.16
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
181 TestJSONOutput/start/Command 90.87
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.78
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.69
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.84
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 42.09
207 TestKicCustomNetwork/use_default_bridge_network 31.14
208 TestKicExistingNetwork 37.71
209 TestKicCustomSubnet 32.44
210 TestKicStaticIP 31.96
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 67.28
215 TestMountStart/serial/StartWithMountFirst 8.63
216 TestMountStart/serial/VerifyMountFirst 0.26
217 TestMountStart/serial/StartWithMountSecond 6.05
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.64
220 TestMountStart/serial/VerifyMountPostDelete 0.31
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.61
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 64.55
227 TestMultiNode/serial/DeployApp2Nodes 17.87
228 TestMultiNode/serial/PingHostFrom2Pods 0.97
229 TestMultiNode/serial/AddNode 19.68
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.31
232 TestMultiNode/serial/CopyFile 9.93
233 TestMultiNode/serial/StopNode 2.25
234 TestMultiNode/serial/StartAfterStop 9.86
235 TestMultiNode/serial/RestartKeepsNodes 98.07
236 TestMultiNode/serial/DeleteNode 5.5
237 TestMultiNode/serial/StopMultiNode 24.06
238 TestMultiNode/serial/RestartMultiNode 51.73
239 TestMultiNode/serial/ValidateNameConflict 31.99
244 TestPreload 110.68
246 TestScheduledStopUnix 108.15
249 TestInsufficientStorage 13.43
250 TestRunningBinaryUpgrade 94.36
252 TestKubernetesUpgrade 354.12
253 TestMissingContainerUpgrade 180.2
255 TestPause/serial/Start 62.52
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
258 TestNoKubernetes/serial/StartWithK8s 41.16
259 TestNoKubernetes/serial/StartWithStopK8s 17.63
260 TestNoKubernetes/serial/Start 9.24
261 TestPause/serial/SecondStartNoReconfiguration 6.94
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
263 TestNoKubernetes/serial/ProfileList 1.08
264 TestPause/serial/Pause 0.9
265 TestNoKubernetes/serial/Stop 1.27
266 TestPause/serial/VerifyStatus 0.3
267 TestPause/serial/Unpause 0.89
268 TestNoKubernetes/serial/StartNoArgs 7.2
269 TestPause/serial/PauseAgain 1.17
270 TestPause/serial/DeletePaused 2.65
271 TestPause/serial/VerifyDeletedResources 0.45
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
273 TestStoppedBinaryUpgrade/Setup 0.62
274 TestStoppedBinaryUpgrade/Upgrade 97.43
275 TestStoppedBinaryUpgrade/MinikubeLogs 1
290 TestNetworkPlugins/group/false 4.81
295 TestStartStop/group/old-k8s-version/serial/FirstStart 154.26
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.62
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.68
298 TestStartStop/group/old-k8s-version/serial/Stop 12.13
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
302 TestStartStop/group/no-preload/serial/FirstStart 80.07
303 TestStartStop/group/no-preload/serial/DeployApp 8.37
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.34
305 TestStartStop/group/no-preload/serial/Stop 12.06
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/SecondStart 281.37
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
310 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
312 TestStartStop/group/old-k8s-version/serial/Pause 2.98
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.12
315 TestStartStop/group/embed-certs/serial/FirstStart 93.14
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
317 TestStartStop/group/no-preload/serial/Pause 4.56
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 96.33
320 TestStartStop/group/embed-certs/serial/DeployApp 8.45
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.23
322 TestStartStop/group/embed-certs/serial/Stop 12.08
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.34
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
325 TestStartStop/group/embed-certs/serial/SecondStart 266.94
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.28
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 303.08
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
333 TestStartStop/group/embed-certs/serial/Pause 3.27
335 TestStartStop/group/newest-cni/serial/FirstStart 37.15
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.07
339 TestStartStop/group/newest-cni/serial/Stop 1.26
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
341 TestStartStop/group/newest-cni/serial/SecondStart 22.7
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.45
345 TestNetworkPlugins/group/auto/Start 98.62
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.45
349 TestStartStop/group/newest-cni/serial/Pause 3.62
350 TestNetworkPlugins/group/kindnet/Start 86.24
351 TestNetworkPlugins/group/auto/KubeletFlags 0.4
352 TestNetworkPlugins/group/auto/NetCatPod 9.29
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
355 TestNetworkPlugins/group/kindnet/NetCatPod 8.26
356 TestNetworkPlugins/group/auto/DNS 0.21
357 TestNetworkPlugins/group/auto/Localhost 0.17
358 TestNetworkPlugins/group/auto/HairPin 0.17
359 TestNetworkPlugins/group/kindnet/DNS 0.25
360 TestNetworkPlugins/group/kindnet/Localhost 0.21
361 TestNetworkPlugins/group/kindnet/HairPin 0.23
362 TestNetworkPlugins/group/calico/Start 72.35
363 TestNetworkPlugins/group/custom-flannel/Start 60.11
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.33
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.27
368 TestNetworkPlugins/group/calico/NetCatPod 10.27
369 TestNetworkPlugins/group/custom-flannel/DNS 0.3
370 TestNetworkPlugins/group/custom-flannel/Localhost 0.22
371 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
372 TestNetworkPlugins/group/calico/DNS 0.31
373 TestNetworkPlugins/group/calico/Localhost 0.25
374 TestNetworkPlugins/group/calico/HairPin 0.24
375 TestNetworkPlugins/group/enable-default-cni/Start 53.32
376 TestNetworkPlugins/group/flannel/Start 56.89
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
384 TestNetworkPlugins/group/flannel/NetCatPod 11.38
385 TestNetworkPlugins/group/flannel/DNS 0.24
386 TestNetworkPlugins/group/flannel/Localhost 0.29
387 TestNetworkPlugins/group/flannel/HairPin 0.24
388 TestNetworkPlugins/group/bridge/Start 74.63
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
390 TestNetworkPlugins/group/bridge/NetCatPod 10.29
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.14
393 TestNetworkPlugins/group/bridge/HairPin 0.17
TestDownloadOnly/v1.20.0/json-events (12.62s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-815538 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-815538 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (12.620653467s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.62s)
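A download-only run of this shape can be reproduced by hand. The sketch below is illustrative, not the test's code: the profile name is arbitrary and the cache path assumes the default MINIKUBE_HOME (the CI run above uses a custom one).

	# Populate binary and preload caches without creating a node
	minikube start --download-only -p download-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker
	# The preload tarball lands under the cache directory:
	ls ~/.minikube/cache/preloaded-tarball/
	# preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4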

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-815538
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-815538: exit status 85 (118.48054ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-815538 | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |          |
	|         | -p download-only-815538        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:21:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:21:32.505099 1459854 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:21:32.505319 1459854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:21:32.505347 1459854 out.go:358] Setting ErrFile to fd 2...
	I0914 00:21:32.505369 1459854 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:21:32.505658 1459854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	W0914 00:21:32.505834 1459854 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19640-1454467/.minikube/config/config.json: open /home/jenkins/minikube-integration/19640-1454467/.minikube/config/config.json: no such file or directory
	I0914 00:21:32.506293 1459854 out.go:352] Setting JSON to true
	I0914 00:21:32.507193 1459854 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29040,"bootTime":1726244253,"procs":154,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 00:21:32.507309 1459854 start.go:139] virtualization:  
	I0914 00:21:32.511666 1459854 out.go:97] [download-only-815538] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0914 00:21:32.511917 1459854 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 00:21:32.511963 1459854 notify.go:220] Checking for updates...
	I0914 00:21:32.514279 1459854 out.go:169] MINIKUBE_LOCATION=19640
	I0914 00:21:32.516887 1459854 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:21:32.519042 1459854 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 00:21:32.521459 1459854 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	I0914 00:21:32.523270 1459854 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 00:21:32.527464 1459854 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 00:21:32.527752 1459854 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:21:32.553655 1459854 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:21:32.553794 1459854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:21:32.614702 1459854 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 00:21:32.604638577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:21:32.614813 1459854 docker.go:318] overlay module found
	I0914 00:21:32.617276 1459854 out.go:97] Using the docker driver based on user configuration
	I0914 00:21:32.617303 1459854 start.go:297] selected driver: docker
	I0914 00:21:32.617311 1459854 start.go:901] validating driver "docker" against <nil>
	I0914 00:21:32.617425 1459854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:21:32.677057 1459854 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 00:21:32.666911329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:21:32.677280 1459854 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:21:32.677548 1459854 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 00:21:32.677715 1459854 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 00:21:32.679600 1459854 out.go:169] Using Docker driver with root privileges
	I0914 00:21:32.681606 1459854 cni.go:84] Creating CNI manager for ""
	I0914 00:21:32.681688 1459854 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 00:21:32.681702 1459854 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 00:21:32.681790 1459854 start.go:340] cluster config:
	{Name:download-only-815538 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-815538 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:21:32.683912 1459854 out.go:97] Starting "download-only-815538" primary control-plane node in "download-only-815538" cluster
	I0914 00:21:32.683947 1459854 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 00:21:32.686199 1459854 out.go:97] Pulling base image v0.0.45-1726243947-19640 ...
	I0914 00:21:32.686234 1459854 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 00:21:32.686333 1459854 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 00:21:32.702509 1459854 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:21:32.702717 1459854 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 00:21:32.702822 1459854 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:21:32.754640 1459854 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0914 00:21:32.754664 1459854 cache.go:56] Caching tarball of preloaded images
	I0914 00:21:32.754852 1459854 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 00:21:32.756985 1459854 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 00:21:32.757018 1459854 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0914 00:21:32.852099 1459854 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0914 00:21:37.461749 1459854 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0914 00:21:37.461845 1459854 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0914 00:21:38.559326 1459854 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0914 00:21:38.559779 1459854 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/download-only-815538/config.json ...
	I0914 00:21:38.559816 1459854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/download-only-815538/config.json: {Name:mkce8deebabb014e1634faab47b223fe6903af7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:21:38.560019 1459854 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 00:21:38.560737 1459854 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-815538 host does not exist
	  To start a cluster, run: "minikube start -p download-only-815538"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.12s)
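The non-zero exit is the expected result here, not a failure: with --download-only no control-plane node is ever created, so `minikube logs` has nothing to read and returns the hint shown in the stdout above. A sketch of the check, which the test treats as the expected outcome:

	out/minikube-linux-arm64 logs -p download-only-815538
	echo $?   # 85 in this run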

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-815538
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)
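As the name suggests, this check pins down delete idempotency: deleting a profile that `delete --all` has already removed must still succeed. A sketch:

	out/minikube-linux-arm64 delete --all
	out/minikube-linux-arm64 delete -p download-only-815538   # exits 0 even if the profile is already gone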

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.54s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-512994 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-512994 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.53838501s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.54s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-512994
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-512994: exit status 85 (78.562467ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-815538 | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |                     |
	|         | -p download-only-815538        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| delete  | -p download-only-815538        | download-only-815538 | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC | 14 Sep 24 00:21 UTC |
	| start   | -o=json --download-only        | download-only-512994 | jenkins | v1.34.0 | 14 Sep 24 00:21 UTC |                     |
	|         | -p download-only-512994        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 00:21:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 00:21:45.623027 1460052 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:21:45.623262 1460052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:21:45.623280 1460052 out.go:358] Setting ErrFile to fd 2...
	I0914 00:21:45.623285 1460052 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:21:45.623527 1460052 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 00:21:45.623967 1460052 out.go:352] Setting JSON to true
	I0914 00:21:45.624780 1460052 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29053,"bootTime":1726244253,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 00:21:45.624848 1460052 start.go:139] virtualization:  
	I0914 00:21:45.628090 1460052 out.go:97] [download-only-512994] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 00:21:45.628333 1460052 notify.go:220] Checking for updates...
	I0914 00:21:45.630861 1460052 out.go:169] MINIKUBE_LOCATION=19640
	I0914 00:21:45.633344 1460052 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:21:45.635903 1460052 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 00:21:45.638461 1460052 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	I0914 00:21:45.641424 1460052 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 00:21:45.646267 1460052 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 00:21:45.646543 1460052 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:21:45.677928 1460052 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:21:45.678050 1460052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:21:45.729220 1460052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:21:45.719916123 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:21:45.729333 1460052 docker.go:318] overlay module found
	I0914 00:21:45.732119 1460052 out.go:97] Using the docker driver based on user configuration
	I0914 00:21:45.732150 1460052 start.go:297] selected driver: docker
	I0914 00:21:45.732158 1460052 start.go:901] validating driver "docker" against <nil>
	I0914 00:21:45.732270 1460052 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:21:45.786080 1460052 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 00:21:45.776658514 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:21:45.786306 1460052 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 00:21:45.786607 1460052 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 00:21:45.786760 1460052 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 00:21:45.789645 1460052 out.go:169] Using Docker driver with root privileges
	I0914 00:21:45.792261 1460052 cni.go:84] Creating CNI manager for ""
	I0914 00:21:45.792328 1460052 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 00:21:45.792341 1460052 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 00:21:45.792422 1460052 start.go:340] cluster config:
	{Name:download-only-512994 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-512994 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:21:45.795028 1460052 out.go:97] Starting "download-only-512994" primary control-plane node in "download-only-512994" cluster
	I0914 00:21:45.795063 1460052 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 00:21:45.797653 1460052 out.go:97] Pulling base image v0.0.45-1726243947-19640 ...
	I0914 00:21:45.797693 1460052 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 00:21:45.797725 1460052 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0914 00:21:45.813068 1460052 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0914 00:21:45.813182 1460052 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0914 00:21:45.813226 1460052 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0914 00:21:45.813273 1460052 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0914 00:21:45.813280 1460052 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0914 00:21:45.859729 1460052 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0914 00:21:45.859754 1460052 cache.go:56] Caching tarball of preloaded images
	I0914 00:21:45.860270 1460052 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 00:21:45.863057 1460052 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0914 00:21:45.863081 1460052 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 00:21:45.945406 1460052 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0914 00:21:49.504373 1460052 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 00:21:49.504471 1460052 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 00:21:50.360999 1460052 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0914 00:21:50.361413 1460052 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/download-only-512994/config.json ...
	I0914 00:21:50.361447 1460052 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/download-only-512994/config.json: {Name:mk393873687ce2aebe9bc22bebbb78f33561fd9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 00:21:50.361626 1460052 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 00:21:50.361793 1460052 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19640-1454467/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-512994 host does not exist
	  To start a cluster, run: "minikube start -p download-only-512994"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-512994
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.56s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-987653 --alsologtostderr --binary-mirror http://127.0.0.1:34241 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-987653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-987653
--- PASS: TestBinaryMirror (0.56s)
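The --binary-mirror flag redirects the kubectl/kubelet/kubeadm downloads away from dl.k8s.io. A sketch, assuming a local static server whose directory reproduces the release-binary layout seen in the download URLs earlier in this report (release/<version>/bin/linux/<arch>/...); the server command and mirror directory are illustrative, not part of the test:

	python3 -m http.server 34241 --directory ./k8s-mirror &
	out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:34241 --driver=docker --container-runtime=containerd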

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-131319
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-131319: exit status 85 (73.321436ms)

                                                
                                                
-- stdout --
	* Profile "addons-131319" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-131319"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-131319
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-131319: exit status 85 (67.521136ms)

                                                
                                                
-- stdout --
	* Profile "addons-131319" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-131319"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
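Both PreSetup checks pin down the same contract: addon commands against a profile that does not exist fail fast with exit status 85 and a pointer to `minikube profile list`, rather than implicitly creating anything. A sketch:

	out/minikube-linux-arm64 addons enable dashboard -p addons-131319   # profile not created yet
	echo $?   # 85, with the "Profile ... not found" hint shown above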

                                                
                                    
TestAddons/Setup (216.67s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-131319 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-131319 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m36.672261054s)
--- PASS: TestAddons/Setup (216.67s)
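Setup enables every addon under test in a single `start` via repeated --addons flags. The same state can also be reached incrementally after start; a sketch (addon names taken from the flags above):

	out/minikube-linux-arm64 addons enable registry -p addons-131319
	out/minikube-linux-arm64 addons enable metrics-server -p addons-131319
	out/minikube-linux-arm64 addons enable csi-hostpath-driver -p addons-131319
	# ...and likewise for the remaining --addons flags listed above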

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-131319 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-131319 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/parallel/Registry (16.42s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.410938ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-xcfqw" [2e57b878-f2f9-4d80-a055-4ca334d60419] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004650827s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-thrrk" [505e6aed-b3d3-4b91-aaf7-d36064f56137] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004680804s
addons_test.go:342: (dbg) Run:  kubectl --context addons-131319 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-131319 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-131319 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.241940337s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 ip
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.42s)
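Besides the in-cluster DNS probe above, the registry addon is reachable from the host through registry-proxy on the node IP; the later "[DEBUG] GET http://192.168.49.2:5000" line in this report is that path. A sketch using the standard OCI registry catalog endpoint:

	curl http://$(out/minikube-linux-arm64 -p addons-131319 ip):5000/v2/_catalog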

                                                
                                    
TestAddons/parallel/Ingress (20.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-131319 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-131319 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-131319 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [4b69785f-93d2-49fd-8256-8e094529d43d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [4b69785f-93d2-49fd-8256-8e094529d43d] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.047804635s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-131319 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-131319 addons disable ingress-dns --alsologtostderr -v=1: (1.242031334s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-131319 addons disable ingress --alsologtostderr -v=1: (8.11792914s)
--- PASS: TestAddons/parallel/Ingress (20.40s)
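The assertions here cover both ingress paths: host-header routing through ingress-nginx, and name resolution through the ingress-dns addon against the node IP. Restated standalone (both commands appear verbatim above):

	out/minikube-linux-arm64 -p addons-131319 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test $(out/minikube-linux-arm64 -p addons-131319 ip)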

                                                
                                    
TestAddons/parallel/InspektorGadget (10.98s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-b9bjg" [ae425b35-7f17-4690-9b72-1a07b89405a6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004724663s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-131319
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-131319: (5.970334362s)
--- PASS: TestAddons/parallel/InspektorGadget (10.98s)

                                                
                                    
TestAddons/parallel/MetricsServer (6.8s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.116207ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-ltmw6" [8091786a-c4c2-4358-bc15-288b7232a51f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004739347s
addons_test.go:417: (dbg) Run:  kubectl --context addons-131319 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)
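Once the metrics-server pod is healthy, the Metrics API backs `kubectl top`; the test queries pod metrics, and node metrics work the same way. A sketch:

	kubectl --context addons-131319 top pods -n kube-system
	kubectl --context addons-131319 top nodes   # same API, node-level view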

                                                
                                    
TestAddons/parallel/CSI (53.83s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.650669ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-131319 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-131319 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a63e16f6-45af-4ee8-947f-58f09f1a01eb] Pending
helpers_test.go:344: "task-pv-pod" [a63e16f6-45af-4ee8-947f-58f09f1a01eb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
2024/09/14 00:29:26 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:344: "task-pv-pod" [a63e16f6-45af-4ee8-947f-58f09f1a01eb] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004125855s
addons_test.go:590: (dbg) Run:  kubectl --context addons-131319 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-131319 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-131319 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-131319 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-131319 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-131319 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-131319 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [88203e9d-9591-428f-b0db-3b3df98a394f] Pending
helpers_test.go:344: "task-pv-pod-restore" [88203e9d-9591-428f-b0db-3b3df98a394f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [88203e9d-9591-428f-b0db-3b3df98a394f] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003896818s
addons_test.go:632: (dbg) Run:  kubectl --context addons-131319 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-131319 delete pod task-pv-pod-restore: (1.415511604s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-131319 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-131319 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-131319 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.963964784s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-131319 addons disable volumesnapshots --alsologtostderr -v=1: (1.005234236s)
--- PASS: TestAddons/parallel/CSI (53.83s)
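The repeated helpers_test.go:394 invocations above are a poll loop: the harness re-runs the same kubectl query until the PVC reports phase "Bound" or the wait window closes. A minimal sketch of that loop (an assumed shape, not the actual helpers_test.go code; the function name and the 2s interval are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPVCBound re-runs the kubectl query seen in the log above until the
// PVC's .status.phase is "Bound" or the timeout elapses.
func waitPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // polling interval is an assumption
	}
	return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
}

func main() {
	// Context, PVC name, and the 6m0s window mirror the hpvc-restore wait above.
	if err := waitPVCBound("addons-131319", "hpvc-restore", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```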

TestAddons/parallel/Headlamp (15.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-131319 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-131319 --alsologtostderr -v=1: (1.158827396s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-zw7d6" [5131b10b-739c-447e-9bd2-6fed054b3f7f] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-zw7d6" [5131b10b-739c-447e-9bd2-6fed054b3f7f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-zw7d6" [5131b10b-739c-447e-9bd2-6fed054b3f7f] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004509678s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-131319 addons disable headlamp --alsologtostderr -v=1: (5.75364047s)
--- PASS: TestAddons/parallel/Headlamp (15.92s)
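The helpers_test.go:344 lines track pods matched by a label selector as they move from Pending to Running. One poll iteration can be approximated with a single kubectl jsonpath query (a sketch only; the real helper also watches container readiness, which is where the Ready:ContainersNotReady annotations above come from):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print "name=phase" for every pod matching the selector used in the
	// Headlamp wait above; expect e.g. "headlamp-57fb76fcdb-zw7d6=Running".
	out, err := exec.Command("kubectl", "--context", "addons-131319",
		"get", "pods", "-n", "headlamp", "-l", "app.kubernetes.io/name=headlamp",
		"-o", `jsonpath={range .items[*]}{.metadata.name}={.status.phase}{"\n"}{end}`).Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```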

TestAddons/parallel/CloudSpanner (5.82s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-7qcsk" [8a777a02-9612-4f1f-9a5f-9d44e6b61c17] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004576647s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-131319
--- PASS: TestAddons/parallel/CloudSpanner (5.82s)

TestAddons/parallel/LocalPath (10.5s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-131319 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-131319 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-131319 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [01ca88d1-a18b-4884-8950-7ba745044b9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [01ca88d1-a18b-4884-8950-7ba745044b9e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [01ca88d1-a18b-4884-8950-7ba745044b9e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003663199s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-131319 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 ssh "cat /opt/local-path-provisioner/pvc-911bc414-c965-49b8-a419-275b37cf35e7_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-131319 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-131319 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.50s)

TestAddons/parallel/NvidiaDevicePlugin (6.63s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-88zhs" [72d14544-2ba6-426a-8bed-2ae9afb79959] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003849508s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-131319
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.63s)

TestAddons/parallel/Yakd (10.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fn4qh" [026b37ce-f42b-4130-a538-2e0268373e80] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.014563713s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-131319 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-131319 addons disable yakd --alsologtostderr -v=1: (5.843671468s)
--- PASS: TestAddons/parallel/Yakd (10.86s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-131319
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-131319: (12.052245247s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-131319
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-131319
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-131319
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

TestCertOptions (39.06s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-229583 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-229583 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.368141822s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-229583 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-229583 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-229583 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-229583" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-229583
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-229583: (2.041769653s)
--- PASS: TestCertOptions (39.06s)
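The openssl line above dumps the apiserver certificate so the test can assert that the extra --apiserver-names and --apiserver-ips landed in its SANs. The same inspection can be sketched with Go's crypto/x509 (assumes apiserver.crt has been copied out of the node; the local path is hypothetical):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("apiserver.crt") // hypothetical local copy
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in apiserver.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// For this run, www.google.com and 192.168.15.15 should appear among the SANs.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}
```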

TestCertExpiration (226.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-547976 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-547976 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.655283297s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-547976 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-547976 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.802921956s)
helpers_test.go:175: Cleaning up "cert-expiration-547976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-547976
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-547976: (2.337470322s)
--- PASS: TestCertExpiration (226.80s)

TestForceSystemdFlag (43.6s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-627102 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-627102 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.826450195s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-627102 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-627102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-627102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-627102: (2.39819039s)
--- PASS: TestForceSystemdFlag (43.60s)
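docker_test.go:121 fetches /etc/containerd/config.toml after the --force-systemd start. A hedged sketch of the follow-up assertion; that the check looks for containerd's runc `SystemdCgroup = true` option is an assumption about what the test verifies:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command as the log line above: cat containerd's config inside the node.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-627102",
		"ssh", "cat /etc/containerd/config.toml").Output()
	if err != nil {
		panic(err)
	}
	// "SystemdCgroup = true" is how containerd's runc options select the
	// systemd cgroup driver; assuming this is the property under test.
	if strings.Contains(string(out), "SystemdCgroup = true") {
		fmt.Println("containerd is using the systemd cgroup driver")
	} else {
		fmt.Println("systemd cgroup driver not enabled")
	}
}
```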

TestForceSystemdEnv (51.45s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-094130 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-094130 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (46.37064537s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-094130 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-094130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-094130
E0914 01:08:32.799709 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-094130: (4.638700266s)
--- PASS: TestForceSystemdEnv (51.45s)

TestDockerEnvContainerd (46.37s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-750866 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-750866 --driver=docker  --container-runtime=containerd: (30.709837148s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-750866"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-750866": (1.048094249s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-QmCsPA3cDvPC/agent.1478356" SSH_AGENT_PID="1478357" DOCKER_HOST=ssh://docker@127.0.0.1:34629 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-QmCsPA3cDvPC/agent.1478356" SSH_AGENT_PID="1478357" DOCKER_HOST=ssh://docker@127.0.0.1:34629 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-QmCsPA3cDvPC/agent.1478356" SSH_AGENT_PID="1478357" DOCKER_HOST=ssh://docker@127.0.0.1:34629 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.196388236s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-QmCsPA3cDvPC/agent.1478356" SSH_AGENT_PID="1478357" DOCKER_HOST=ssh://docker@127.0.0.1:34629 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-750866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-750866
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-750866: (1.980288354s)
--- PASS: TestDockerEnvContainerd (46.37s)
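The bash pipeline above points a host-side docker CLI at the dockerd inside the minikube node over SSH: `docker-env --ssh-host --ssh-add` emits the SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST values, and every later docker command tunnels through them. The same wiring, sketched in Go (socket path, agent PID, and port are copied from this run's log and are valid only for that session):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "version")
	cmd.Env = append(os.Environ(),
		// Values below came out of "docker-env --ssh-host --ssh-add" for this
		// run; a fresh invocation would print different ones.
		"DOCKER_HOST=ssh://docker@127.0.0.1:34629",
		"SSH_AUTH_SOCK=/tmp/ssh-QmCsPA3cDvPC/agent.1478356",
		"SSH_AGENT_PID=1478357",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("docker over ssh failed:", err)
	}
}
```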

TestErrorSpam/setup (30.88s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-485384 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-485384 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-485384 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-485384 --driver=docker  --container-runtime=containerd: (30.878271492s)
--- PASS: TestErrorSpam/setup (30.88s)

TestErrorSpam/start (0.76s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

TestErrorSpam/status (1.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 status
--- PASS: TestErrorSpam/status (1.11s)

TestErrorSpam/pause (1.87s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 pause
--- PASS: TestErrorSpam/pause (1.87s)

TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (1.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 stop: (1.195602524s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-485384 --log_dir /tmp/nospam-485384 stop
--- PASS: TestErrorSpam/stop (1.39s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19640-1454467/.minikube/files/etc/test/nested/copy/1459848/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (77.81s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-089303 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-089303 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m17.804139295s)
--- PASS: TestFunctional/serial/StartWithProxy (77.81s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.18s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-089303 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-089303 --alsologtostderr -v=8: (6.171201138s)
functional_test.go:663: soft start took 6.175009423s for "functional-089303" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.18s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-089303 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 cache add registry.k8s.io/pause:3.1: (1.582720302s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 cache add registry.k8s.io/pause:3.3: (1.323425911s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 cache add registry.k8s.io/pause:latest: (1.280809717s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-089303 /tmp/TestFunctionalserialCacheCmdcacheadd_local3962269193/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cache add minikube-local-cache-test:functional-089303
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cache delete minikube-local-cache-test:functional-089303
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-089303
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.839546ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)
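The reload cycle above leans on crictl's exit status: `crictl inspecti` exits non-zero while the image is absent (the FATA line in the captured stdout) and zero again once `cache reload` has pushed it back into the node. A compact sketch of that probe (profile and image come from the log; treating any error as "absent" is a simplification):

```go
package main

import (
	"fmt"
	"os/exec"
)

// imagePresent reports whether "crictl inspecti" exits zero inside the node;
// exit status 1, as captured above, means the image is gone.
func imagePresent(profile, image string) bool {
	return exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "sudo crictl inspecti "+image).Run() == nil
}

func main() {
	const profile, image = "functional-089303", "registry.k8s.io/pause:latest"
	_ = exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "sudo crictl rmi "+image).Run()
	fmt.Println("present after rmi:", imagePresent(profile, image)) // expect false
	_ = exec.Command("out/minikube-linux-arm64", "-p", profile, "cache", "reload").Run()
	fmt.Println("present after reload:", imagePresent(profile, image)) // expect true
}
```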

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 kubectl -- --context functional-089303 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-089303 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.68s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-089303 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-089303 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.680912541s)
functional_test.go:761: restart took 42.681045004s for "functional-089303" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.68s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-089303 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 logs: (1.70499381s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.75s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 logs --file /tmp/TestFunctionalserialLogsFileCmd3064147813/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 logs --file /tmp/TestFunctionalserialLogsFileCmd3064147813/001/logs.txt: (1.753483781s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.75s)

TestFunctional/serial/InvalidService (4.88s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-089303 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-089303
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-089303: exit status 115 (618.071065ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31312 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-089303 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-089303 delete -f testdata/invalidsvc.yaml: (1.001192846s)
--- PASS: TestFunctional/serial/InvalidService (4.88s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 config get cpus: exit status 14 (60.212125ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 config get cpus: exit status 14 (71.766516ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (8.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-089303 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-089303 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1493270: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.37s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-089303 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-089303 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (186.806369ms)

-- stdout --
	* [functional-089303] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0914 00:35:19.902487 1492906 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:35:19.902680 1492906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:19.902692 1492906 out.go:358] Setting ErrFile to fd 2...
	I0914 00:35:19.902698 1492906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:19.902977 1492906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 00:35:19.903392 1492906 out.go:352] Setting JSON to false
	I0914 00:35:19.904501 1492906 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29867,"bootTime":1726244253,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 00:35:19.904578 1492906 start.go:139] virtualization:  
	I0914 00:35:19.907153 1492906 out.go:177] * [functional-089303] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 00:35:19.910165 1492906 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:35:19.910272 1492906 notify.go:220] Checking for updates...
	I0914 00:35:19.913953 1492906 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:35:19.916038 1492906 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 00:35:19.918181 1492906 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	I0914 00:35:19.919960 1492906 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 00:35:19.921911 1492906 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:35:19.924922 1492906 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:35:19.925643 1492906 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:35:19.946962 1492906 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:35:19.947099 1492906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:20.015786 1492906 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 00:35:19.998238357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:20.016309 1492906 docker.go:318] overlay module found
	I0914 00:35:20.018979 1492906 out.go:177] * Using the docker driver based on existing profile
	I0914 00:35:20.021032 1492906 start.go:297] selected driver: docker
	I0914 00:35:20.021076 1492906 start.go:901] validating driver "docker" against &{Name:functional-089303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-089303 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:35:20.021217 1492906 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:35:20.023539 1492906 out.go:201] 
	W0914 00:35:20.025412 1492906 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 00:35:20.027736 1492906 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-089303 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-089303 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-089303 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (279.099383ms)

-- stdout --
	* [functional-089303] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0914 00:35:19.656605 1492804 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:35:19.656795 1492804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:19.656822 1492804 out.go:358] Setting ErrFile to fd 2...
	I0914 00:35:19.656968 1492804 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:35:19.658409 1492804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 00:35:19.658888 1492804 out.go:352] Setting JSON to false
	I0914 00:35:19.659976 1492804 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":29867,"bootTime":1726244253,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 00:35:19.660078 1492804 start.go:139] virtualization:  
	I0914 00:35:19.664426 1492804 out.go:177] * [functional-089303] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0914 00:35:19.666909 1492804 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 00:35:19.666978 1492804 notify.go:220] Checking for updates...
	I0914 00:35:19.673107 1492804 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 00:35:19.674920 1492804 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 00:35:19.677294 1492804 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	I0914 00:35:19.679089 1492804 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 00:35:19.681155 1492804 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 00:35:19.683702 1492804 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:35:19.684314 1492804 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 00:35:19.730801 1492804 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 00:35:19.730976 1492804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:35:19.833440 1492804 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 00:35:19.819970755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:35:19.833557 1492804 docker.go:318] overlay module found
	I0914 00:35:19.835980 1492804 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0914 00:35:19.838065 1492804 start.go:297] selected driver: docker
	I0914 00:35:19.838087 1492804 start.go:901] validating driver "docker" against &{Name:functional-089303 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-089303 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 00:35:19.838218 1492804 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 00:35:19.840823 1492804 out.go:201] 
	W0914 00:35:19.842925 1492804 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 00:35:19.845111 1492804 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)

TestFunctional/parallel/StatusCmd (1.02s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
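
The check above exercises the Go-template form of `minikube status -f`. A minimal sketch of the same invocation from Go, assuming the binary path and profile name used in this run are available on the host:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -f takes a Go-template over the status struct; `status` exits
	// non-zero when a component is not Running, so an error is reported
	// rather than treated as fatal.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-089303",
		"status", "-f", "host:{{.Host}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}").CombinedOutput()
	if err != nil {
		fmt.Println("status returned:", err)
	}
	fmt.Print(string(out))
}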

TestFunctional/parallel/ServiceCmdConnect (9.68s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-089303 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-089303 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-h7xjf" [f4c25a5c-07f7-480e-b6e4-8afa705b6d01] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-h7xjf" [f4c25a5c-07f7-480e-b6e4-8afa705b6d01] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003101679s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31844
functional_test.go:1675: http://192.168.49.2:31844: success! body:

Hostname: hello-node-connect-65d86f57f4-h7xjf

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31844
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.68s)
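
The connect test resolves the NodePort URL with `service --url` and then issues a plain HTTP GET, which is what produced the echoserver body above. A minimal Go sketch of that round trip (binary path, profile, and service name are taken from this run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the externally reachable NodePort URL.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-089303",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:31844

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %d\n%s", url, resp.StatusCode, body)
}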

TestFunctional/parallel/AddonsCmd (0.4s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.40s)

TestFunctional/parallel/PersistentVolumeClaim (23.14s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [11133d75-3f6c-45b5-a299-a94d17b83c6c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0040724s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-089303 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-089303 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-089303 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-089303 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e5ac80e8-194c-4dc9-9c81-c4990a2377fc] Pending
helpers_test.go:344: "sp-pod" [e5ac80e8-194c-4dc9-9c81-c4990a2377fc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e5ac80e8-194c-4dc9-9c81-c4990a2377fc] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004119856s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-089303 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-089303 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-089303 delete -f testdata/storage-provisioner/pod.yaml: (1.113351077s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-089303 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [169c127f-cef8-45b9-84db-4aeee3be5586] Pending
helpers_test.go:344: "sp-pod" [169c127f-cef8-45b9-84db-4aeee3be5586] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00382097s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-089303 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.14s)
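
The sequence above is the classic persistence check: write through the claim, delete the pod, recreate it from the same manifest, and confirm the file is still on the volume. A compressed sketch of those kubectl steps from Go (context name and manifest paths are the ones used in this run; unlike the test, this sketch does not wait for the replacement pod to reach Running before the final exec):

package main

import (
	"fmt"
	"os/exec"
)

// kc runs kubectl against the cluster under test and returns its output.
func kc(args ...string) string {
	args = append([]string{"--context", "functional-089303"}, args...)
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl %v: %v\n", args, err)
	}
	return string(out)
}

func main() {
	kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// Once the replacement pod is Running, the file written by the old
	// pod should still be present on the PersistentVolume.
	fmt.Print(kc("exec", "sp-pod", "--", "ls", "/tmp/mount"))
}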

TestFunctional/parallel/SSHCmd (0.68s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (2.26s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh -n functional-089303 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cp functional-089303:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1614410727/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh -n functional-089303 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh -n functional-089303 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.26s)
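
Note the last pair of commands: the copy into /tmp/does/not/exist succeeds, i.e. in this run `minikube cp` created the missing target directories inside the node. A minimal sketch of both copy directions (paths from this run; node-side paths are prefixed with the profile name):

package main

import "os/exec"

// cp copies a file between the host and the node via `minikube cp`.
func cp(src, dst string) error {
	return exec.Command("out/minikube-linux-arm64", "-p", "functional-089303",
		"cp", src, dst).Run()
}

func main() {
	// host -> node
	if err := cp("testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		panic(err)
	}
	// node -> host: the node-side source carries the profile name as prefix
	if err := cp("functional-089303:/home/docker/cp-test.txt", "/tmp/cp-test.txt"); err != nil {
		panic(err)
	}
}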

TestFunctional/parallel/FileSync (0.37s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1459848/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo cat /etc/test/nested/copy/1459848/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.19s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1459848.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo cat /etc/ssl/certs/1459848.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1459848.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo cat /usr/share/ca-certificates/1459848.pem"
E0914 00:35:29.730283 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:29.737697 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:29.749842 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo cat /etc/ssl/certs/51391683.0"
E0914 00:35:29.771332 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:29.812981 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:29.897146 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:30.058578 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/14598482.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo cat /etc/ssl/certs/14598482.pem"
E0914 00:35:30.380265 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/14598482.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo cat /usr/share/ca-certificates/14598482.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
E0914 00:35:31.022022 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/CertSync (2.19s)

TestFunctional/parallel/NodeLabels (0.1s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-089303 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)
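
The --template flag above is an ordinary Go text/template evaluated against the JSON of `kubectl get nodes`. A standalone sketch of what it computes, run against a trimmed stand-in for the node list (the label values below are illustrative, not from this run):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template as the test: print every label key of the first item.
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))

	// Hypothetical stand-in for `kubectl get nodes -o json` output.
	data := map[string]any{
		"items": []any{
			map[string]any{
				"metadata": map[string]any{
					"labels": map[string]any{
						"kubernetes.io/arch": "arm64",
						"kubernetes.io/os":   "linux",
					},
				},
			},
		},
	}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}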

TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 ssh "sudo systemctl is-active docker": exit status 1 (375.093572ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 ssh "sudo systemctl is-active crio": exit status 1 (381.965049ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
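
The non-zero exits above are the point of this test: `systemctl is-active` prints the unit state and exits 0 only for "active", so on a containerd cluster both docker and crio are expected to report inactive with a non-zero status (surfaced through ssh as exit status 3). A sketch of the same probe; note the test runs it inside the node via `minikube ssh`, whereas this version checks whatever host it is executed on:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive reports whether a systemd unit is active. systemctl prints the
// state on stdout and exits non-zero for any state other than "active".
func isActive(unit string) bool {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	state := strings.TrimSpace(string(out))
	fmt.Printf("%s: %s\n", unit, state)
	return err == nil && state == "active"
}

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		isActive(unit)
	}
}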

TestFunctional/parallel/License (0.31s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-089303 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-089303 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-089303 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-089303 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1490533: os: process already finished
helpers_test.go:502: unable to terminate pid 1490350: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-089303 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.4s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-089303 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [5aad368b-54be-4090-95bf-3b19ec0de830] Pending
helpers_test.go:344: "nginx-svc" [5aad368b-54be-4090-95bf-3b19ec0de830] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [5aad368b-54be-4090-95bf-3b19ec0de830] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004280144s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-089303 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.30.226 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
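
With the tunnel up, a LoadBalancer service's ingress IP becomes routable from the host; the jsonpath query in WaitService/IngressIP above is how it is read back. A minimal sketch that fetches the IP and probes it (context, service name, and the 10.103.30.226 example come from this run):

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-089303",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out)) // e.g. 10.103.30.226

	// Reachable only while `minikube tunnel` is running.
	resp, err := http.Get("http://" + ip)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Printf("tunnel at http://%s is working: HTTP %d\n", ip, resp.StatusCode)
}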

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-089303 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-089303 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-089303 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-mnwmz" [41be1c75-cbd7-4b1a-b5da-ea20149d6b3d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-mnwmz" [41be1c75-cbd7-4b1a-b5da-ea20149d6b3d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00379635s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "308.932563ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "60.177945ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "357.404627ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "63.423995ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

TestFunctional/parallel/MountCmd/any-port (8.24s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdany-port1984348832/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726274115162709570" to /tmp/TestFunctionalparallelMountCmdany-port1984348832/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726274115162709570" to /tmp/TestFunctionalparallelMountCmdany-port1984348832/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726274115162709570" to /tmp/TestFunctionalparallelMountCmdany-port1984348832/001/test-1726274115162709570
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (327.836928ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 00:35 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 00:35 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 00:35 test-1726274115162709570
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh cat /mount-9p/test-1726274115162709570
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-089303 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [041c8faa-b7db-47ae-9ec3-993d34df9ebe] Pending
helpers_test.go:344: "busybox-mount" [041c8faa-b7db-47ae-9ec3-993d34df9ebe] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [041c8faa-b7db-47ae-9ec3-993d34df9ebe] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [041c8faa-b7db-47ae-9ec3-993d34df9ebe] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005148211s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-089303 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdany-port1984348832/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.24s)
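
The first findmnt probe above fails and is simply re-run: the 9p server takes a moment to come up after `minikube mount` starts, so a failed grep right away is not an error by itself. A sketch of that retry loop (binary, profile, and mount point from this run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	for attempt := 1; attempt <= 5; attempt++ {
		// Inside the node: is /mount-9p backed by a 9p filesystem yet?
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-089303",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(time.Second) // mount not visible yet; try again
	}
	fmt.Println("9p mount never became visible")
}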

TestFunctional/parallel/ServiceCmd/List (0.57s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.57s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 service list -o json
functional_test.go:1494: Took "576.176248ms" to run "out/minikube-linux-arm64 -p functional-089303 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31252
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

TestFunctional/parallel/ServiceCmd/Format (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.43s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31252
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.06s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdspecific-port1023025511/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (405.159433ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdspecific-port1023025511/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 ssh "sudo umount -f /mount-9p": exit status 1 (327.937267ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-089303 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdspecific-port1023025511/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.06s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2880769414/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2880769414/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2880769414/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T" /mount1: (1.012699479s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-089303 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2880769414/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2880769414/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-089303 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2880769414/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.27s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 version -o=json --components: (1.266138246s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-089303 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-089303
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-089303
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-089303 image ls --format short --alsologtostderr:
I0914 00:35:35.524090 1495700 out.go:345] Setting OutFile to fd 1 ...
I0914 00:35:35.524259 1495700 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:35.524271 1495700 out.go:358] Setting ErrFile to fd 2...
I0914 00:35:35.524277 1495700 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:35.524529 1495700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
I0914 00:35:35.525180 1495700 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:35.525302 1495700 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:35.525780 1495700 cli_runner.go:164] Run: docker container inspect functional-089303 --format={{.State.Status}}
I0914 00:35:35.546413 1495700 ssh_runner.go:195] Run: systemctl --version
I0914 00:35:35.546470 1495700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-089303
I0914 00:35:35.574220 1495700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/functional-089303/id_rsa Username:docker}
I0914 00:35:35.669875 1495700 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-089303 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| docker.io/library/minikube-local-cache-test | functional-089303  | sha256:1447d2 | 991B   |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kicbase/echo-server               | functional-089303  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-089303 image ls --format table --alsologtostderr:
I0914 00:35:36.135774 1495853 out.go:345] Setting OutFile to fd 1 ...
I0914 00:35:36.135922 1495853 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:36.135928 1495853 out.go:358] Setting ErrFile to fd 2...
I0914 00:35:36.135933 1495853 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:36.136178 1495853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
I0914 00:35:36.137038 1495853 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:36.137167 1495853 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:36.137700 1495853 cli_runner.go:164] Run: docker container inspect functional-089303 --format={{.State.Status}}
I0914 00:35:36.170880 1495853 ssh_runner.go:195] Run: systemctl --version
I0914 00:35:36.170935 1495853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-089303
I0914 00:35:36.199269 1495853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/functional-089303/id_rsa Username:docker}
I0914 00:35:36.297067 1495853 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-089303 image ls --format json --alsologtostderr:
[{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9
d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:afb61768ce381961ca0beff9
5337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:7f8aa37
8bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/
kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:1447d27c04c4bc45cf8f6179b62ef3e392fe95c47a2443305c8aeb7f015ad238","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-089303"],"size":"991"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-089303"],"size":"2173567"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-089303 image ls --format json --alsologtostderr:
I0914 00:35:35.823590 1495767 out.go:345] Setting OutFile to fd 1 ...
I0914 00:35:35.823747 1495767 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:35.823759 1495767 out.go:358] Setting ErrFile to fd 2...
I0914 00:35:35.823765 1495767 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:35.824104 1495767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
I0914 00:35:35.824778 1495767 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:35.824895 1495767 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:35.825505 1495767 cli_runner.go:164] Run: docker container inspect functional-089303 --format={{.State.Status}}
I0914 00:35:35.858327 1495767 ssh_runner.go:195] Run: systemctl --version
I0914 00:35:35.858429 1495767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-089303
I0914 00:35:35.901150 1495767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/functional-089303/id_rsa Username:docker}
I0914 00:35:35.997063 1495767 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.32s)
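Note: the JSON listing above is produced by minikube shelling into the node and running `sudo crictl images --output json` (last stderr line). A quick way to eyeball the same data by hand is sketched below; the jq filter is illustrative only and assumes jq is installed on the host — it is not part of the test:

out/minikube-linux-arm64 -p functional-089303 image ls --format json \
  | jq -r '.[] | [.id, (.repoTags | join(","))] | @tsv'   # one image per row: id, tags (empty if untagged)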

TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-089303 image ls --format yaml --alsologtostderr:
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-089303
size: "2173567"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:1447d27c04c4bc45cf8f6179b62ef3e392fe95c47a2443305c8aeb7f015ad238
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-089303
size: "991"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-089303 image ls --format yaml --alsologtostderr:
I0914 00:35:35.532360 1495699 out.go:345] Setting OutFile to fd 1 ...
I0914 00:35:35.532731 1495699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:35.532739 1495699 out.go:358] Setting ErrFile to fd 2...
I0914 00:35:35.532745 1495699 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:35.532998 1495699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
I0914 00:35:35.533620 1495699 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:35.533727 1495699 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:35.534200 1495699 cli_runner.go:164] Run: docker container inspect functional-089303 --format={{.State.Status}}
I0914 00:35:35.556014 1495699 ssh_runner.go:195] Run: systemctl --version
I0914 00:35:35.556172 1495699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-089303
I0914 00:35:35.581746 1495699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/functional-089303/id_rsa Username:docker}
I0914 00:35:35.669880 1495699 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-089303 ssh pgrep buildkitd: exit status 1 (327.344274ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image build -t localhost/my-image:functional-089303 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 image build -t localhost/my-image:functional-089303 testdata/build --alsologtostderr: (3.310515868s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-089303 image build -t localhost/my-image:functional-089303 testdata/build --alsologtostderr:
I0914 00:35:36.140915 1495852 out.go:345] Setting OutFile to fd 1 ...
I0914 00:35:36.142100 1495852 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:36.142122 1495852 out.go:358] Setting ErrFile to fd 2...
I0914 00:35:36.142128 1495852 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 00:35:36.142502 1495852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
I0914 00:35:36.143369 1495852 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:36.144241 1495852 config.go:182] Loaded profile config "functional-089303": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 00:35:36.144862 1495852 cli_runner.go:164] Run: docker container inspect functional-089303 --format={{.State.Status}}
I0914 00:35:36.174943 1495852 ssh_runner.go:195] Run: systemctl --version
I0914 00:35:36.175111 1495852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-089303
I0914 00:35:36.196863 1495852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34639 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/functional-089303/id_rsa Username:docker}
I0914 00:35:36.288709 1495852 build_images.go:161] Building image from path: /tmp/build.1648120057.tar
I0914 00:35:36.288825 1495852 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 00:35:36.300820 1495852 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1648120057.tar
I0914 00:35:36.306015 1495852 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1648120057.tar: stat -c "%s %y" /var/lib/minikube/build/build.1648120057.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1648120057.tar': No such file or directory
I0914 00:35:36.306047 1495852 ssh_runner.go:362] scp /tmp/build.1648120057.tar --> /var/lib/minikube/build/build.1648120057.tar (3072 bytes)
I0914 00:35:36.343970 1495852 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1648120057
I0914 00:35:36.362691 1495852 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1648120057 -xf /var/lib/minikube/build/build.1648120057.tar
I0914 00:35:36.376193 1495852 containerd.go:394] Building image: /var/lib/minikube/build/build.1648120057
I0914 00:35:36.376303 1495852 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1648120057 --local dockerfile=/var/lib/minikube/build/build.1648120057 --output type=image,name=localhost/my-image:functional-089303
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:68d7906cd8df6866544c19db96a4e1594033bfa70e78454a24585dc8592eb086 0.0s done
#8 exporting config sha256:ff419f79b8e825d83e879aaaa1f15be2489e1f0d28bef61cada21533ae31cc32 0.0s done
#8 naming to localhost/my-image:functional-089303 done
#8 DONE 0.1s
I0914 00:35:39.338627 1495852 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1648120057 --local dockerfile=/var/lib/minikube/build/build.1648120057 --output type=image,name=localhost/my-image:functional-089303: (2.962290391s)
I0914 00:35:39.338707 1495852 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1648120057
I0914 00:35:39.350450 1495852 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1648120057.tar
I0914 00:35:39.360444 1495852 build_images.go:217] Built localhost/my-image:functional-089303 from /tmp/build.1648120057.tar
I0914 00:35:39.360477 1495852 build_images.go:133] succeeded building to: functional-089303
I0914 00:35:39.360482 1495852 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.86s)
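The buildkit trace above is consistent with a three-instruction Dockerfile. The actual contents of testdata/build are not shown in this log, so the sketch below is inferred from steps #5-#7 and reproduces the same #1-#8 pipeline by hand:

cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo hello > content.txt   # small file, matching the 62B build context in step #4
out/minikube-linux-arm64 -p functional-089303 image build -t localhost/my-image:functional-089303 .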

TestFunctional/parallel/ImageCommands/Setup (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
2024/09/14 00:35:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-089303
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image load --daemon kicbase/echo-server:functional-089303 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 image load --daemon kicbase/echo-server:functional-089303 --alsologtostderr: (1.292667671s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.56s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image load --daemon kicbase/echo-server:functional-089303 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 image load --daemon kicbase/echo-server:functional-089303 --alsologtostderr: (1.126072138s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.39s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
E0914 00:35:32.304062 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-089303
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image load --daemon kicbase/echo-server:functional-089303 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-089303 image load --daemon kicbase/echo-server:functional-089303 --alsologtostderr: (1.025498104s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.52s)
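The three *Daemon load tests above all stage the image through the host's docker daemon rather than through a tarball. The by-hand equivalent, with the commands and tag taken directly from the log:

docker pull kicbase/echo-server:latest
docker tag kicbase/echo-server:latest kicbase/echo-server:functional-089303
out/minikube-linux-arm64 -p functional-089303 image load --daemon kicbase/echo-server:functional-089303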

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image save kicbase/echo-server:functional-089303 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image rm kicbase/echo-server:functional-089303 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.46s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image ls
E0914 00:35:34.865876 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-089303
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-089303 image save --daemon kicbase/echo-server:functional-089303 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-089303
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
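Taken together, the last four tests cover a full save/remove/load round trip. A minimal sketch of the same cycle, with /tmp substituted for the Jenkins workspace path used above:

out/minikube-linux-arm64 -p functional-089303 image save kicbase/echo-server:functional-089303 /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-089303 image rm kicbase/echo-server:functional-089303
out/minikube-linux-arm64 -p functional-089303 image load /tmp/echo-server-save.tar
out/minikube-linux-arm64 -p functional-089303 image ls   # the tag should be present again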

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-089303
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-089303
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-089303
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (112.96s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-302402 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0914 00:35:50.228837 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:10.710229 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:51.671699 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-302402 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m52.128891464s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (112.96s)
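For reference, the whole multi-control-plane topology exercised by the rest of this group comes from the single start invocation above; reproducing it by hand and checking the resulting node set is just the two commands from ha_test.go:101 and ha_test.go:107:

out/minikube-linux-arm64 start -p ha-302402 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr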

TestMultiControlPlane/serial/DeployApp (34.18s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-302402 -- rollout status deployment/busybox: (31.221307972s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-2c82c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-49fzq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-rjx8l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-2c82c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-49fzq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-rjx8l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-2c82c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-49fzq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-rjx8l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (34.18s)
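The deploy check above boils down to: roll out the busybox deployment, then resolve three names from inside every pod. By hand, with <pod> standing in for one of the busybox-7dff88458-* pod names listed above:

out/minikube-linux-arm64 kubectl -p ha-302402 -- rollout status deployment/busybox
out/minikube-linux-arm64 kubectl -p ha-302402 -- exec <pod> -- nslookup kubernetes.io
out/minikube-linux-arm64 kubectl -p ha-302402 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local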

TestMultiControlPlane/serial/PingHostFromPods (1.59s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-2c82c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-2c82c -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-49fzq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-49fzq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-rjx8l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-302402 -- exec busybox-7dff88458-rjx8l -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)
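The awk/cut pipeline in ha_test.go:207 extracts the bare host IP from busybox's nslookup output: on this image the answer lands on line 5, and the address is the third space-separated field. Broken into steps (the resulting 192.168.49.1 matches the ping target on the following test line):

nslookup host.minikube.internal                                 # full resolver output
nslookup host.minikube.internal | awk 'NR==5'                   # keep only line 5
nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # -> 192.168.49.1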

TestMultiControlPlane/serial/AddWorkerNode (22.63s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-302402 -v=7 --alsologtostderr
E0914 00:38:13.593043 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-302402 -v=7 --alsologtostderr: (21.666928312s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.63s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-302402 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.77s)

TestMultiControlPlane/serial/CopyFile (19.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp testdata/cp-test.txt ha-302402:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile198073647/001/cp-test_ha-302402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402:/home/docker/cp-test.txt ha-302402-m02:/home/docker/cp-test_ha-302402_ha-302402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m02 "sudo cat /home/docker/cp-test_ha-302402_ha-302402-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402:/home/docker/cp-test.txt ha-302402-m03:/home/docker/cp-test_ha-302402_ha-302402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test_ha-302402_ha-302402-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402:/home/docker/cp-test.txt ha-302402-m04:/home/docker/cp-test_ha-302402_ha-302402-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m04 "sudo cat /home/docker/cp-test_ha-302402_ha-302402-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp testdata/cp-test.txt ha-302402-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile198073647/001/cp-test_ha-302402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m02:/home/docker/cp-test.txt ha-302402:/home/docker/cp-test_ha-302402-m02_ha-302402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402 "sudo cat /home/docker/cp-test_ha-302402-m02_ha-302402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m02:/home/docker/cp-test.txt ha-302402-m03:/home/docker/cp-test_ha-302402-m02_ha-302402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test_ha-302402-m02_ha-302402-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m02:/home/docker/cp-test.txt ha-302402-m04:/home/docker/cp-test_ha-302402-m02_ha-302402-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m04 "sudo cat /home/docker/cp-test_ha-302402-m02_ha-302402-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp testdata/cp-test.txt ha-302402-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile198073647/001/cp-test_ha-302402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m03:/home/docker/cp-test.txt ha-302402:/home/docker/cp-test_ha-302402-m03_ha-302402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402 "sudo cat /home/docker/cp-test_ha-302402-m03_ha-302402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m03:/home/docker/cp-test.txt ha-302402-m02:/home/docker/cp-test_ha-302402-m03_ha-302402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m02 "sudo cat /home/docker/cp-test_ha-302402-m03_ha-302402-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m03:/home/docker/cp-test.txt ha-302402-m04:/home/docker/cp-test_ha-302402-m03_ha-302402-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m04 "sudo cat /home/docker/cp-test_ha-302402-m03_ha-302402-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp testdata/cp-test.txt ha-302402-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile198073647/001/cp-test_ha-302402-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m04:/home/docker/cp-test.txt ha-302402:/home/docker/cp-test_ha-302402-m04_ha-302402.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402 "sudo cat /home/docker/cp-test_ha-302402-m04_ha-302402.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m04:/home/docker/cp-test.txt ha-302402-m02:/home/docker/cp-test_ha-302402-m04_ha-302402-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m02 "sudo cat /home/docker/cp-test_ha-302402-m04_ha-302402-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m04:/home/docker/cp-test.txt ha-302402-m03:/home/docker/cp-test_ha-302402-m04_ha-302402-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test_ha-302402-m04_ha-302402-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.06s)
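The copy matrix above runs the same two-step check for every (source, destination) pair of nodes: minikube cp the file across, then read it back over ssh. One representative pair, verbatim from the log:

out/minikube-linux-arm64 -p ha-302402 cp ha-302402-m02:/home/docker/cp-test.txt ha-302402-m03:/home/docker/cp-test_ha-302402-m02_ha-302402-m03.txt
out/minikube-linux-arm64 -p ha-302402 ssh -n ha-302402-m03 "sudo cat /home/docker/cp-test_ha-302402-m02_ha-302402-m03.txt"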

TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-302402 node stop m02 -v=7 --alsologtostderr: (12.11973113s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr: exit status 7 (698.194602ms)
-- stdout --
	ha-302402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-302402-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-302402-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-302402-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0914 00:39:05.943045 1511958 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:39:05.943207 1511958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:39:05.943219 1511958 out.go:358] Setting ErrFile to fd 2...
	I0914 00:39:05.943225 1511958 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:39:05.943470 1511958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 00:39:05.943664 1511958 out.go:352] Setting JSON to false
	I0914 00:39:05.943702 1511958 mustload.go:65] Loading cluster: ha-302402
	I0914 00:39:05.943778 1511958 notify.go:220] Checking for updates...
	I0914 00:39:05.944188 1511958 config.go:182] Loaded profile config "ha-302402": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:39:05.944209 1511958 status.go:255] checking status of ha-302402 ...
	I0914 00:39:05.944861 1511958 cli_runner.go:164] Run: docker container inspect ha-302402 --format={{.State.Status}}
	I0914 00:39:05.965442 1511958 status.go:330] ha-302402 host status = "Running" (err=<nil>)
	I0914 00:39:05.965472 1511958 host.go:66] Checking if "ha-302402" exists ...
	I0914 00:39:05.965767 1511958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-302402
	I0914 00:39:05.989644 1511958 host.go:66] Checking if "ha-302402" exists ...
	I0914 00:39:05.989943 1511958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:39:05.990084 1511958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-302402
	I0914 00:39:06.016358 1511958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34644 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/ha-302402/id_rsa Username:docker}
	I0914 00:39:06.113508 1511958 ssh_runner.go:195] Run: systemctl --version
	I0914 00:39:06.118268 1511958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:39:06.130877 1511958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:39:06.187635 1511958 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-14 00:39:06.176898983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:39:06.188305 1511958 kubeconfig.go:125] found "ha-302402" server: "https://192.168.49.254:8443"
	I0914 00:39:06.188343 1511958 api_server.go:166] Checking apiserver status ...
	I0914 00:39:06.188397 1511958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:39:06.200696 1511958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	I0914 00:39:06.210751 1511958 api_server.go:182] apiserver freezer: "12:freezer:/docker/add9e25ee91ff15cd0f5bd5b157e7a45709427ebc36bbf9c739e3ff3b7d94951/kubepods/burstable/pod2a860624d874cee156c603b98d7777de/8b8676af344d24b3d90f89f86cdbecbeecc14b5366eeec63101b648a4fef1342"
	I0914 00:39:06.210836 1511958 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/add9e25ee91ff15cd0f5bd5b157e7a45709427ebc36bbf9c739e3ff3b7d94951/kubepods/burstable/pod2a860624d874cee156c603b98d7777de/8b8676af344d24b3d90f89f86cdbecbeecc14b5366eeec63101b648a4fef1342/freezer.state
	I0914 00:39:06.220252 1511958 api_server.go:204] freezer state: "THAWED"
	I0914 00:39:06.220345 1511958 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0914 00:39:06.229837 1511958 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0914 00:39:06.229869 1511958 status.go:422] ha-302402 apiserver status = Running (err=<nil>)
	I0914 00:39:06.229881 1511958 status.go:257] ha-302402 status: &{Name:ha-302402 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:39:06.229934 1511958 status.go:255] checking status of ha-302402-m02 ...
	I0914 00:39:06.230278 1511958 cli_runner.go:164] Run: docker container inspect ha-302402-m02 --format={{.State.Status}}
	I0914 00:39:06.248153 1511958 status.go:330] ha-302402-m02 host status = "Stopped" (err=<nil>)
	I0914 00:39:06.248179 1511958 status.go:343] host is not running, skipping remaining checks
	I0914 00:39:06.248187 1511958 status.go:257] ha-302402-m02 status: &{Name:ha-302402-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:39:06.248217 1511958 status.go:255] checking status of ha-302402-m03 ...
	I0914 00:39:06.248539 1511958 cli_runner.go:164] Run: docker container inspect ha-302402-m03 --format={{.State.Status}}
	I0914 00:39:06.265810 1511958 status.go:330] ha-302402-m03 host status = "Running" (err=<nil>)
	I0914 00:39:06.265834 1511958 host.go:66] Checking if "ha-302402-m03" exists ...
	I0914 00:39:06.266126 1511958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-302402-m03
	I0914 00:39:06.282950 1511958 host.go:66] Checking if "ha-302402-m03" exists ...
	I0914 00:39:06.283283 1511958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:39:06.283328 1511958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-302402-m03
	I0914 00:39:06.300311 1511958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34654 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/ha-302402-m03/id_rsa Username:docker}
	I0914 00:39:06.385191 1511958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:39:06.398151 1511958 kubeconfig.go:125] found "ha-302402" server: "https://192.168.49.254:8443"
	I0914 00:39:06.398194 1511958 api_server.go:166] Checking apiserver status ...
	I0914 00:39:06.398238 1511958 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:39:06.410039 1511958 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1328/cgroup
	I0914 00:39:06.419694 1511958 api_server.go:182] apiserver freezer: "12:freezer:/docker/9ba7c4679c7a9bad45bc08ccb0dad3b1fdc934d21d1d7a6792abacbf25a56a1a/kubepods/burstable/pode0dbccaa4fc63236ec064fb6ad4433eb/8920789fd9dfd19fcc6817335decb583e965bf74292caf450e645e87645350ad"
	I0914 00:39:06.419775 1511958 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9ba7c4679c7a9bad45bc08ccb0dad3b1fdc934d21d1d7a6792abacbf25a56a1a/kubepods/burstable/pode0dbccaa4fc63236ec064fb6ad4433eb/8920789fd9dfd19fcc6817335decb583e965bf74292caf450e645e87645350ad/freezer.state
	I0914 00:39:06.428447 1511958 api_server.go:204] freezer state: "THAWED"
	I0914 00:39:06.428520 1511958 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0914 00:39:06.436386 1511958 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0914 00:39:06.436413 1511958 status.go:422] ha-302402-m03 apiserver status = Running (err=<nil>)
	I0914 00:39:06.436423 1511958 status.go:257] ha-302402-m03 status: &{Name:ha-302402-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:39:06.436440 1511958 status.go:255] checking status of ha-302402-m04 ...
	I0914 00:39:06.436756 1511958 cli_runner.go:164] Run: docker container inspect ha-302402-m04 --format={{.State.Status}}
	I0914 00:39:06.454016 1511958 status.go:330] ha-302402-m04 host status = "Running" (err=<nil>)
	I0914 00:39:06.454041 1511958 host.go:66] Checking if "ha-302402-m04" exists ...
	I0914 00:39:06.454348 1511958 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-302402-m04
	I0914 00:39:06.471498 1511958 host.go:66] Checking if "ha-302402-m04" exists ...
	I0914 00:39:06.471805 1511958 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:39:06.471919 1511958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-302402-m04
	I0914 00:39:06.493516 1511958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34659 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/ha-302402-m04/id_rsa Username:docker}
	I0914 00:39:06.580774 1511958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:39:06.592192 1511958 status.go:257] ha-302402-m04 status: &{Name:ha-302402-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.82s)
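The stderr trace above shows how the status probe decides an apiserver is healthy: find the process, locate its freezer cgroup, confirm the cgroup is THAWED, then hit /healthz through the load-balancer address. Reconstructed as standalone commands run inside a node — <pid> and the cgroup path vary per run, and curl stands in for the in-process health check the binary actually performs:

sudo pgrep -xnf kube-apiserver.*minikube.*                     # find the apiserver PID
sudo egrep "^[0-9]+:freezer:" /proc/<pid>/cgroup               # locate its freezer cgroup
sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state    # expect "THAWED"
curl -k https://192.168.49.254:8443/healthz                    # expect HTTP 200 "ok"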

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (19.02s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-302402 node start m02 -v=7 --alsologtostderr: (17.804221052s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr: (1.099105671s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.02s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.75s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.08s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-302402 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-302402 -v=7 --alsologtostderr
E0914 00:39:50.291691 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:50.298053 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:50.309384 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:50.330739 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:50.372048 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:50.453333 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:50.614645 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:50.936176 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:51.578153 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:52.860324 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:55.421818 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:00.544176 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-302402 -v=7 --alsologtostderr: (37.603204149s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-302402 --wait=true -v=7 --alsologtostderr
E0914 00:40:10.787050 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:29.729922 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:31.268490 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:57.435068 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:41:12.229900 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-302402 --wait=true -v=7 --alsologtostderr: (1m36.328925314s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-302402
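
The repeated E-lines above come from client-go's background client-certificate reload (cert_rotation.go), not from this test: the watcher still references client.crt files under profiles that earlier tests deleted (functional-089303, addons-131319), so each reload attempt fails with "no such file or directory" and is retried with backoff; the cluster under test is unaffected. A hedged sketch of the failure mode (the path below is illustrative, not the exact one from the log):

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func main() {
		// Hypothetical stand-in for a deleted profile's client cert.
		cert := "/tmp/minikube/profiles/deleted-profile/client.crt"
		if _, err := os.ReadFile(cert); errors.Is(err, fs.ErrNotExist) {
			// This is the condition the watcher keeps logging; it is
			// harmless to the running cluster and the test still passes.
			fmt.Println("client cert missing, reload skipped:", err)
		}
	}
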
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.08s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-302402 node delete m03 -v=7 --alsologtostderr: (9.618042557s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
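
The go-template in the command above walks every node's status.conditions and prints the status of each Ready condition, one per line, so a fully healthy cluster yields one "True" line per node. The same template evaluated locally with Go's text/template, over a hand-built stand-in for the node list (a sketch, not the test's own code):

	package main

	import (
		"os"
		"text/template"
	)

	// The exact template from the test, minus the shell quoting.
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		// Stand-in for `kubectl get nodes -o json` after unmarshalling.
		nodes := map[string]any{
			"items": []any{
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
				}}},
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "MemoryPressure", "status": "False"},
					map[string]any{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(tmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
		// Prints " True" twice: one line per Ready node.
	}
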
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMultiControlPlane/serial/StopCluster (36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-302402 stop -v=7 --alsologtostderr: (35.883711841s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr: exit status 7 (118.402344ms)

-- stdout --
	ha-302402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-302402-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-302402-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0914 00:42:27.979819 1526209 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:42:27.980346 1526209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:42:27.980357 1526209 out.go:358] Setting ErrFile to fd 2...
	I0914 00:42:27.980362 1526209 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:42:27.980618 1526209 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 00:42:27.980818 1526209 out.go:352] Setting JSON to false
	I0914 00:42:27.980850 1526209 mustload.go:65] Loading cluster: ha-302402
	I0914 00:42:27.980957 1526209 notify.go:220] Checking for updates...
	I0914 00:42:27.981289 1526209 config.go:182] Loaded profile config "ha-302402": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:42:27.981300 1526209 status.go:255] checking status of ha-302402 ...
	I0914 00:42:27.982214 1526209 cli_runner.go:164] Run: docker container inspect ha-302402 --format={{.State.Status}}
	I0914 00:42:27.999950 1526209 status.go:330] ha-302402 host status = "Stopped" (err=<nil>)
	I0914 00:42:27.999971 1526209 status.go:343] host is not running, skipping remaining checks
	I0914 00:42:27.999978 1526209 status.go:257] ha-302402 status: &{Name:ha-302402 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:42:28.000008 1526209 status.go:255] checking status of ha-302402-m02 ...
	I0914 00:42:28.000321 1526209 cli_runner.go:164] Run: docker container inspect ha-302402-m02 --format={{.State.Status}}
	I0914 00:42:28.025274 1526209 status.go:330] ha-302402-m02 host status = "Stopped" (err=<nil>)
	I0914 00:42:28.025298 1526209 status.go:343] host is not running, skipping remaining checks
	I0914 00:42:28.025305 1526209 status.go:257] ha-302402-m02 status: &{Name:ha-302402-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:42:28.025328 1526209 status.go:255] checking status of ha-302402-m04 ...
	I0914 00:42:28.025642 1526209 cli_runner.go:164] Run: docker container inspect ha-302402-m04 --format={{.State.Status}}
	I0914 00:42:28.047815 1526209 status.go:330] ha-302402-m04 host status = "Stopped" (err=<nil>)
	I0914 00:42:28.047841 1526209 status.go:343] host is not running, skipping remaining checks
	I0914 00:42:28.047883 1526209 status.go:257] ha-302402-m04 status: &{Name:ha-302402-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
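
The non-zero exit above is by design: minikube status returns a non-zero code when any component is down, so scripts can branch without parsing the text, and here the fully stopped cluster yields exit status 7. A sketch that treats the exit code as the signal (the specific meaning of 7 is read off this log, not asserted from minikube documentation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-302402", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			// Non-zero exit (7 in the run above) => at least one
			// host/kubelet/apiserver is not running.
			fmt.Println("cluster not fully running, exit code:", exitErr.ExitCode())
		} else if err == nil {
			fmt.Println("all components running")
		} else {
			fmt.Println("could not run status:", err)
		}
	}
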
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)

TestMultiControlPlane/serial/RestartCluster (66.78s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-302402 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0914 00:42:34.151343 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-302402 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.828789459s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (66.78s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

TestMultiControlPlane/serial/AddSecondaryNode (43.16s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-302402 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-302402 --control-plane -v=7 --alsologtostderr: (42.1433818s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-302402 status -v=7 --alsologtostderr: (1.015751348s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

TestJSONOutput/start/Command (90.87s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-733244 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0914 00:44:50.291146 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:45:17.994882 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:45:29.729501 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-733244 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m30.860376682s)
--- PASS: TestJSONOutput/start/Command (90.87s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-733244 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-733244 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.84s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-733244 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-733244 --output=json --user=testUser: (5.843187012s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-044503 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-044503 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.674327ms)

-- stdout --
	{"specversion":"1.0","id":"c7978270-d707-45bc-84d7-da1b8d4a9110","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-044503] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"044e93fe-876c-413d-b579-aa84b70bdaac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"073458f7-0e18-4d84-b474-8bfd5fd8aab9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82d17072-8b07-4b10-8ef2-811f0207f8f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig"}}
	{"specversion":"1.0","id":"802b6c01-0801-4a82-8ec0-372a0e445ff0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube"}}
	{"specversion":"1.0","id":"0b81cc17-e3a6-4d1f-9bf2-c54b39562a9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"a133adfb-be30-4a0f-b6bc-a524d3876121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ac394559-e895-45dc-b94d-934fe43907bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-044503" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-044503
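
Each stdout line above is a self-contained CloudEvents-style JSON object; the final one has type io.k8s.sigs.minikube.error with exitcode 56 for the unsupported "fail" driver. A sketch that scans such output line by line and surfaces errors (field names taken from the events above; the input string is a trimmed stand-in for the real stream):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Stand-in for `minikube start --output=json ...` stdout.
		logs := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS"}}`
		sc := bufio.NewScanner(strings.NewReader(logs))
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // tolerate non-JSON noise on the stream
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}
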
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (42.09s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-767508 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-767508 --network=: (39.990286365s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-767508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-767508
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-767508: (2.070842378s)
--- PASS: TestKicCustomNetwork/create_custom_network (42.09s)

TestKicCustomNetwork/use_default_bridge_network (31.14s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-061252 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-061252 --network=bridge: (29.149969249s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-061252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-061252
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-061252: (1.968188676s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.14s)

TestKicExistingNetwork (37.71s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-501648 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-501648 --network=existing-network: (35.630079669s)
helpers_test.go:175: Cleaning up "existing-network-501648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-501648
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-501648: (1.932498287s)
--- PASS: TestKicExistingNetwork (37.71s)

TestKicCustomSubnet (32.44s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-047390 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-047390 --subnet=192.168.60.0/24: (30.339514351s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-047390 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-047390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-047390
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-047390: (2.079063881s)
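
The inspect command above confirms the requested --subnet survived network creation: the format string {{(index .IPAM.Config 0).Subnet}} indexes the first IPAM config entry of the network and prints its Subnet. The same check scripted in Go (a sketch that shells out to the docker CLI rather than using the Docker SDK; profile name and subnet copied from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		want := "192.168.60.0/24"
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-047390",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		got := strings.TrimSpace(string(out))
		if got != want {
			panic(fmt.Sprintf("subnet mismatch: got %s, want %s", got, want))
		}
		fmt.Println("subnet verified:", got)
	}
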
--- PASS: TestKicCustomSubnet (32.44s)

TestKicStaticIP (31.96s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-238370 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-238370 --static-ip=192.168.200.200: (29.724082625s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-238370 ip
helpers_test.go:175: Cleaning up "static-ip-238370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-238370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-238370: (2.079560193s)
--- PASS: TestKicStaticIP (31.96s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (67.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-738413 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-738413 --driver=docker  --container-runtime=containerd: (28.910603484s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-741514 --driver=docker  --container-runtime=containerd
E0914 00:49:50.291583 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-741514 --driver=docker  --container-runtime=containerd: (32.820392716s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-738413
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-741514
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-741514" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-741514
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-741514: (2.062167184s)
helpers_test.go:175: Cleaning up "first-738413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-738413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-738413: (2.230122947s)
--- PASS: TestMinikubeProfile (67.28s)

TestMountStart/serial/StartWithMountFirst (8.63s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-864037 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-864037 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.629655534s)
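
The start line above exercises minikube's host-folder mount (9p-based) with explicit knobs: --mount-port pins the mount server port (46464 here, 46465 for the second profile below so the two can coexist), --mount-uid/--mount-gid set ownership inside the guest, --mount-msize sets the 9p message size, and --no-kubernetes keeps the profile cheap. A sketch assembling the same invocations programmatically (binary path and profile names copied from the log; the port arithmetic is the only invention):

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
	)

	func startWithMount(profile string, port int) *exec.Cmd {
		return exec.Command("out/minikube-linux-arm64", "start",
			"-p", profile,
			"--memory=2048",
			"--mount", "--mount-gid", "0", "--mount-uid", "0",
			"--mount-msize", "6543",
			"--mount-port", strconv.Itoa(port),
			"--no-kubernetes",
			"--driver=docker", "--container-runtime=containerd")
	}

	func main() {
		// Distinct ports let two mounted profiles run side by side.
		for i, p := range []string{"mount-start-1-864037", "mount-start-2-866014"} {
			fmt.Println(startWithMount(p, 46464+i).String())
		}
	}
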
--- PASS: TestMountStart/serial/StartWithMountFirst (8.63s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-864037 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (6.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-866014 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-866014 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.051706214s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.05s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-866014 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-864037 --alsologtostderr -v=5
E0914 00:50:29.729096 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-864037 --alsologtostderr -v=5: (1.641032052s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-866014 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-866014
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-866014: (1.203783409s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.61s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-866014
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-866014: (6.605890269s)
--- PASS: TestMountStart/serial/RestartStopped (7.61s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-866014 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (64.55s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953857 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-953857 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.031973419s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.55s)

TestMultiNode/serial/DeployApp2Nodes (17.87s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- rollout status deployment/busybox
E0914 00:51:52.797515 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-953857 -- rollout status deployment/busybox: (15.91975744s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-55jz5 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-bfkdd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-55jz5 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-bfkdd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-55jz5 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-bfkdd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.87s)

TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-55jz5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-55jz5 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-bfkdd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-953857 -- exec busybox-7dff88458-bfkdd -- sh -c "ping -c 1 192.168.67.1"
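
The shell pipeline in these exec calls recovers the host gateway IP from busybox's nslookup output: awk 'NR==5' keeps only line 5 and cut -d' ' -f3 takes its third space-separated field, which is then pinged (192.168.67.1) from each pod. A sketch of the same field surgery in Go, assuming busybox's fixed output layout (the sample below is a hand-written stand-in; real busybox output varies by version):

	package main

	import (
		"fmt"
		"strings"
	)

	// gatewayFrom mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`.
	func gatewayFrom(nslookupOut string) string {
		lines := strings.Split(nslookupOut, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.SplitN(lines[4], " ", 4) // NR==5; first three fields = $1..$3
		if len(fields) < 3 {
			return ""
		}
		return fields[2] // $3
	}

	func main() {
		sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\nName:      host.minikube.internal\nAddress: 1 192.168.67.1\n"
		fmt.Println(gatewayFrom(sample)) // 192.168.67.1
	}
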
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

TestMultiNode/serial/AddNode (19.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-953857 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-953857 -v 3 --alsologtostderr: (19.046668208s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (19.68s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-953857 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.31s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.31s)

TestMultiNode/serial/CopyFile (9.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp testdata/cp-test.txt multinode-953857:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3927350019/001/cp-test_multinode-953857.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857:/home/docker/cp-test.txt multinode-953857-m02:/home/docker/cp-test_multinode-953857_multinode-953857-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m02 "sudo cat /home/docker/cp-test_multinode-953857_multinode-953857-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857:/home/docker/cp-test.txt multinode-953857-m03:/home/docker/cp-test_multinode-953857_multinode-953857-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m03 "sudo cat /home/docker/cp-test_multinode-953857_multinode-953857-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp testdata/cp-test.txt multinode-953857-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3927350019/001/cp-test_multinode-953857-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857-m02:/home/docker/cp-test.txt multinode-953857:/home/docker/cp-test_multinode-953857-m02_multinode-953857.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857 "sudo cat /home/docker/cp-test_multinode-953857-m02_multinode-953857.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857-m02:/home/docker/cp-test.txt multinode-953857-m03:/home/docker/cp-test_multinode-953857-m02_multinode-953857-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m03 "sudo cat /home/docker/cp-test_multinode-953857-m02_multinode-953857-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp testdata/cp-test.txt multinode-953857-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3927350019/001/cp-test_multinode-953857-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857-m03:/home/docker/cp-test.txt multinode-953857:/home/docker/cp-test_multinode-953857-m03_multinode-953857.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857 "sudo cat /home/docker/cp-test_multinode-953857-m03_multinode-953857.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 cp multinode-953857-m03:/home/docker/cp-test.txt multinode-953857-m02:/home/docker/cp-test_multinode-953857-m03_multinode-953857-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 ssh -n multinode-953857-m02 "sudo cat /home/docker/cp-test_multinode-953857-m03_multinode-953857-m02.txt"
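
The copy matrix above exercises every direction minikube cp supports: local file to a node, node to a local path, and node to node, each verified by catting the file over ssh on the destination. A condensed sketch of the three shapes (binary, profile, and node names copied from the log; the /tmp destination is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		bin, profile := "out/minikube-linux-arm64", "multinode-953857"
		copies := [][2]string{
			// local -> node
			{"testdata/cp-test.txt", profile + ":/home/docker/cp-test.txt"},
			// node -> local
			{profile + ":/home/docker/cp-test.txt", "/tmp/cp-test_copy.txt"},
			// node -> node
			{profile + ":/home/docker/cp-test.txt", profile + "-m02:/home/docker/cp-test.txt"},
		}
		for _, c := range copies {
			cmd := exec.Command(bin, "-p", profile, "cp", c[0], c[1])
			fmt.Println(cmd.String())
		}
	}
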
--- PASS: TestMultiNode/serial/CopyFile (9.93s)

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-953857 node stop m03: (1.202856433s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-953857 status: exit status 7 (523.278564ms)

-- stdout --
	multinode-953857
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-953857-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-953857-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-953857 status --alsologtostderr: exit status 7 (527.5211ms)

-- stdout --
	multinode-953857
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-953857-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-953857-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0914 00:52:36.389197 1579477 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:52:36.389353 1579477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:52:36.389363 1579477 out.go:358] Setting ErrFile to fd 2...
	I0914 00:52:36.389368 1579477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:52:36.389616 1579477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 00:52:36.389855 1579477 out.go:352] Setting JSON to false
	I0914 00:52:36.389898 1579477 mustload.go:65] Loading cluster: multinode-953857
	I0914 00:52:36.389950 1579477 notify.go:220] Checking for updates...
	I0914 00:52:36.390337 1579477 config.go:182] Loaded profile config "multinode-953857": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:52:36.390356 1579477 status.go:255] checking status of multinode-953857 ...
	I0914 00:52:36.391388 1579477 cli_runner.go:164] Run: docker container inspect multinode-953857 --format={{.State.Status}}
	I0914 00:52:36.409072 1579477 status.go:330] multinode-953857 host status = "Running" (err=<nil>)
	I0914 00:52:36.409099 1579477 host.go:66] Checking if "multinode-953857" exists ...
	I0914 00:52:36.409417 1579477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-953857
	I0914 00:52:36.442650 1579477 host.go:66] Checking if "multinode-953857" exists ...
	I0914 00:52:36.442972 1579477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:52:36.443022 1579477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-953857
	I0914 00:52:36.460463 1579477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34764 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/multinode-953857/id_rsa Username:docker}
	I0914 00:52:36.549127 1579477 ssh_runner.go:195] Run: systemctl --version
	I0914 00:52:36.553569 1579477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:52:36.565140 1579477 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:52:36.620635 1579477 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-14 00:52:36.610562105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:52:36.621225 1579477 kubeconfig.go:125] found "multinode-953857" server: "https://192.168.67.2:8443"
	I0914 00:52:36.621264 1579477 api_server.go:166] Checking apiserver status ...
	I0914 00:52:36.621312 1579477 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:52:36.632324 1579477 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	I0914 00:52:36.641910 1579477 api_server.go:182] apiserver freezer: "12:freezer:/docker/1eadaf76d5b447cc07ae4a60f63633c4b4c6af5fc7f0f40058525885b2bb5c8d/kubepods/burstable/pode94065c4de2ef5c2a262e5e70b9dbf5a/7d831cbcfd77cc3761ec1d8ccb7c6104d3f1f32b63d1aaaa3eff076a5fcb6134"
	I0914 00:52:36.642002 1579477 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1eadaf76d5b447cc07ae4a60f63633c4b4c6af5fc7f0f40058525885b2bb5c8d/kubepods/burstable/pode94065c4de2ef5c2a262e5e70b9dbf5a/7d831cbcfd77cc3761ec1d8ccb7c6104d3f1f32b63d1aaaa3eff076a5fcb6134/freezer.state
	I0914 00:52:36.651069 1579477 api_server.go:204] freezer state: "THAWED"
	I0914 00:52:36.651114 1579477 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 00:52:36.658656 1579477 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0914 00:52:36.658686 1579477 status.go:422] multinode-953857 apiserver status = Running (err=<nil>)
	I0914 00:52:36.658698 1579477 status.go:257] multinode-953857 status: &{Name:multinode-953857 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:52:36.658730 1579477 status.go:255] checking status of multinode-953857-m02 ...
	I0914 00:52:36.659062 1579477 cli_runner.go:164] Run: docker container inspect multinode-953857-m02 --format={{.State.Status}}
	I0914 00:52:36.675798 1579477 status.go:330] multinode-953857-m02 host status = "Running" (err=<nil>)
	I0914 00:52:36.675829 1579477 host.go:66] Checking if "multinode-953857-m02" exists ...
	I0914 00:52:36.676174 1579477 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-953857-m02
	I0914 00:52:36.711060 1579477 host.go:66] Checking if "multinode-953857-m02" exists ...
	I0914 00:52:36.711380 1579477 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:52:36.711429 1579477 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-953857-m02
	I0914 00:52:36.727914 1579477 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34769 SSHKeyPath:/home/jenkins/minikube-integration/19640-1454467/.minikube/machines/multinode-953857-m02/id_rsa Username:docker}
	I0914 00:52:36.812987 1579477 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:52:36.834212 1579477 status.go:257] multinode-953857-m02 status: &{Name:multinode-953857-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:52:36.834247 1579477 status.go:255] checking status of multinode-953857-m03 ...
	I0914 00:52:36.834557 1579477 cli_runner.go:164] Run: docker container inspect multinode-953857-m03 --format={{.State.Status}}
	I0914 00:52:36.851231 1579477 status.go:330] multinode-953857-m03 host status = "Stopped" (err=<nil>)
	I0914 00:52:36.851255 1579477 status.go:343] host is not running, skipping remaining checks
	I0914 00:52:36.851262 1579477 status.go:257] multinode-953857-m03 status: &{Name:multinode-953857-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)

TestMultiNode/serial/StartAfterStop (9.86s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-953857 node start m03 -v=7 --alsologtostderr: (9.102782114s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.86s)

TestMultiNode/serial/RestartKeepsNodes (98.07s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-953857
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-953857
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-953857: (24.992281386s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953857 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-953857 --wait=true -v=8 --alsologtostderr: (1m12.955558562s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-953857
--- PASS: TestMultiNode/serial/RestartKeepsNodes (98.07s)

TestMultiNode/serial/DeleteNode (5.5s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-953857 node delete m03: (4.812150586s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.50s)

TestMultiNode/serial/StopMultiNode (24.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 stop
E0914 00:54:50.291275 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-953857 stop: (23.872468321s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-953857 status: exit status 7 (93.753186ms)
-- stdout --
	multinode-953857
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-953857-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-953857 status --alsologtostderr: exit status 7 (88.735071ms)
-- stdout --
	multinode-953857
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-953857-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0914 00:54:54.300732 1587970 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:54:54.300870 1587970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:54:54.300881 1587970 out.go:358] Setting ErrFile to fd 2...
	I0914 00:54:54.300886 1587970 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:54:54.301113 1587970 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 00:54:54.301296 1587970 out.go:352] Setting JSON to false
	I0914 00:54:54.301330 1587970 mustload.go:65] Loading cluster: multinode-953857
	I0914 00:54:54.301430 1587970 notify.go:220] Checking for updates...
	I0914 00:54:54.301760 1587970 config.go:182] Loaded profile config "multinode-953857": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 00:54:54.301779 1587970 status.go:255] checking status of multinode-953857 ...
	I0914 00:54:54.302335 1587970 cli_runner.go:164] Run: docker container inspect multinode-953857 --format={{.State.Status}}
	I0914 00:54:54.320028 1587970 status.go:330] multinode-953857 host status = "Stopped" (err=<nil>)
	I0914 00:54:54.320051 1587970 status.go:343] host is not running, skipping remaining checks
	I0914 00:54:54.320060 1587970 status.go:257] multinode-953857 status: &{Name:multinode-953857 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:54:54.320093 1587970 status.go:255] checking status of multinode-953857-m02 ...
	I0914 00:54:54.320409 1587970 cli_runner.go:164] Run: docker container inspect multinode-953857-m02 --format={{.State.Status}}
	I0914 00:54:54.341685 1587970 status.go:330] multinode-953857-m02 host status = "Stopped" (err=<nil>)
	I0914 00:54:54.341704 1587970 status.go:343] host is not running, skipping remaining checks
	I0914 00:54:54.341712 1587970 status.go:257] multinode-953857-m02 status: &{Name:multinode-953857-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.06s)

TestMultiNode/serial/RestartMultiNode (51.73s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953857 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0914 00:55:29.729081 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-953857 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.102551914s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-953857 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.73s)

TestMultiNode/serial/ValidateNameConflict (31.99s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-953857
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953857-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-953857-m02 --driver=docker  --container-runtime=containerd: exit status 14 (87.468493ms)
-- stdout --
	* [multinode-953857-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-953857-m02' is duplicated with machine name 'multinode-953857-m02' in profile 'multinode-953857'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-953857-m03 --driver=docker  --container-runtime=containerd
E0914 00:56:13.356282 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-953857-m03 --driver=docker  --container-runtime=containerd: (29.524811649s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-953857
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-953857: exit status 80 (336.748136ms)
-- stdout --
	* Adding node m03 to cluster multinode-953857 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-953857-m03 already exists in multinode-953857-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-953857-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-953857-m03: (1.991500232s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.99s)

TestPreload (110.68s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-304587 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-304587 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m13.354469907s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-304587 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-304587 image pull gcr.io/k8s-minikube/busybox: (1.973690443s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-304587
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-304587: (12.085529933s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-304587 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-304587 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.260833731s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-304587 image list
helpers_test.go:175: Cleaning up "test-preload-304587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-304587
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-304587: (2.512372128s)
--- PASS: TestPreload (110.68s)

TestScheduledStopUnix (108.15s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-407086 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-407086 --memory=2048 --driver=docker  --container-runtime=containerd: (31.655248219s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-407086 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-407086 -n scheduled-stop-407086
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-407086 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-407086 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-407086 -n scheduled-stop-407086
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-407086
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-407086 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0914 00:59:50.291784 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-407086
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-407086: exit status 7 (68.753905ms)
-- stdout --
	scheduled-stop-407086
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-407086 -n scheduled-stop-407086
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-407086 -n scheduled-stop-407086: exit status 7 (67.967382ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-407086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-407086
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-407086: (5.002233231s)
--- PASS: TestScheduledStopUnix (108.15s)

TestInsufficientStorage (13.43s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-756096 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-756096 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.970433977s)
-- stdout --
	{"specversion":"1.0","id":"81b15541-7b03-4a1c-aefd-6ccf39986ec0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-756096] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce71a483-9aa8-4564-ad4b-bd5775f5f48d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"74263fd4-e9ba-48ce-b208-37366ddda7ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6bfcb624-86ee-47fd-81e4-ad7b9822c9ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig"}}
	{"specversion":"1.0","id":"fb1da1ed-505b-4cac-8cf2-eab5f1b8dade","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube"}}
	{"specversion":"1.0","id":"175e2563-414b-44f4-b7b5-34f94576bd94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"58aa736c-3b89-41a8-9da1-89b21fc70fb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0b8d06e-aa09-4cc5-aac4-aa50a0b1579b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ea0d7e1a-8dd9-42ba-bde2-43f56a6ebe0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"541526a2-ebc9-42d5-b08a-67e94061ad6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"bccc5cec-a9b7-474b-895b-dff38c8133d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4020db68-044e-4c52-b231-54ba4701e98a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-756096\" primary control-plane node in \"insufficient-storage-756096\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"2478d5ee-0377-4b56-84c5-f856610aecef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726243947-19640 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa5ecdd8-707a-4db4-a5b8-85b351e35eea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"665ead71-a36b-4da2-ac34-b74219a7fbed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-756096 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-756096 --output=json --layout=cluster: exit status 7 (298.329211ms)
-- stdout --
	{"Name":"insufficient-storage-756096","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-756096","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0914 01:00:12.092162 1606595 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-756096" does not appear in /home/jenkins/minikube-integration/19640-1454467/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-756096 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-756096 --output=json --layout=cluster: exit status 7 (287.326142ms)
-- stdout --
	{"Name":"insufficient-storage-756096","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-756096","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0914 01:00:12.380119 1606655 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-756096" does not appear in /home/jenkins/minikube-integration/19640-1454467/kubeconfig
	E0914 01:00:12.390485 1606655 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/insufficient-storage-756096/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-756096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-756096
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-756096: (1.87401353s)
--- PASS: TestInsufficientStorage (13.43s)

TestRunningBinaryUpgrade (94.36s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3945460589 start -p running-upgrade-980949 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3945460589 start -p running-upgrade-980949 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.985104161s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-980949 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-980949 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.299850988s)
helpers_test.go:175: Cleaning up "running-upgrade-980949" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-980949
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-980949: (3.466023648s)
--- PASS: TestRunningBinaryUpgrade (94.36s)

TestKubernetesUpgrade (354.12s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-582478 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-582478 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.392378353s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-582478
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-582478: (1.267951368s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-582478 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-582478 status --format={{.Host}}: exit status 7 (73.969067ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-582478 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-582478 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m36.78135739s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-582478 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-582478 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-582478 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (192.214149ms)
-- stdout --
	* [kubernetes-upgrade-582478] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-582478
	    minikube start -p kubernetes-upgrade-582478 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5824782 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-582478 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-582478 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-582478 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.472850131s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-582478" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-582478
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-582478: (3.805828516s)
--- PASS: TestKubernetesUpgrade (354.12s)

TestMissingContainerUpgrade (180.2s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.866666912 start -p missing-upgrade-825995 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.866666912 start -p missing-upgrade-825995 --memory=2200 --driver=docker  --container-runtime=containerd: (1m39.574894079s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-825995
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-825995: (10.294354811s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-825995
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-825995 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-825995 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.088692805s)
helpers_test.go:175: Cleaning up "missing-upgrade-825995" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-825995
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-825995: (2.329414623s)
--- PASS: TestMissingContainerUpgrade (180.20s)

TestPause/serial/Start (62.52s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-764023 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-764023 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m2.515049096s)
--- PASS: TestPause/serial/Start (62.52s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982180 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-982180 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (107.755346ms)
-- stdout --
	* [NoKubernetes-982180] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

TestNoKubernetes/serial/StartWithK8s (41.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982180 --driver=docker  --container-runtime=containerd
E0914 01:00:29.729377 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-982180 --driver=docker  --container-runtime=containerd: (40.473081894s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-982180 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.16s)

TestNoKubernetes/serial/StartWithStopK8s (17.63s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982180 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-982180 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.391263551s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-982180 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-982180 status -o json: exit status 2 (307.196692ms)
-- stdout --
	{"Name":"NoKubernetes-982180","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-982180
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-982180: (1.930708792s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.63s)

TestNoKubernetes/serial/Start (9.24s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982180 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-982180 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.237541047s)
--- PASS: TestNoKubernetes/serial/Start (9.24s)

TestPause/serial/SecondStartNoReconfiguration (6.94s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-764023 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-764023 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.927398265s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.94s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-982180 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-982180 "sudo systemctl is-active --quiet service kubelet": exit status 1 (265.903395ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (1.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

TestPause/serial/Pause (0.9s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-764023 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.90s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-982180
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-982180: (1.269501738s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-764023 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-764023 --output=json --layout=cluster: exit status 2 (298.776588ms)
-- stdout --
	{"Name":"pause-764023","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-764023","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)

TestPause/serial/Unpause (0.89s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-764023 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.89s)

TestNoKubernetes/serial/StartNoArgs (7.2s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-982180 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-982180 --driver=docker  --container-runtime=containerd: (7.20117116s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.20s)

TestPause/serial/PauseAgain (1.17s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-764023 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-764023 --alsologtostderr -v=5: (1.169259218s)
--- PASS: TestPause/serial/PauseAgain (1.17s)

TestPause/serial/DeletePaused (2.65s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-764023 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-764023 --alsologtostderr -v=5: (2.654201504s)
--- PASS: TestPause/serial/DeletePaused (2.65s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-764023
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-764023: exit status 1 (30.956522ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-764023: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-982180 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-982180 "sudo systemctl is-active --quiet service kubelet": exit status 1 (322.716168ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

TestStoppedBinaryUpgrade/Setup (0.62s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.62s)

TestStoppedBinaryUpgrade/Upgrade (97.43s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.995478904 start -p stopped-upgrade-511883 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0914 01:04:50.291144 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.995478904 start -p stopped-upgrade-511883 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (42.804512416s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.995478904 -p stopped-upgrade-511883 stop
E0914 01:05:29.729472 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.995478904 -p stopped-upgrade-511883 stop: (19.924720341s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-511883 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-511883 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.703247366s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (97.43s)

TestStoppedBinaryUpgrade/MinikubeLogs (1s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-511883
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

TestNetworkPlugins/group/false (4.81s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-361936 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-361936 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (269.28091ms)
-- stdout --
	* [false-361936] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I0914 01:07:34.146555 1644628 out.go:345] Setting OutFile to fd 1 ...
	I0914 01:07:34.147181 1644628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:07:34.147207 1644628 out.go:358] Setting ErrFile to fd 2...
	I0914 01:07:34.147227 1644628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 01:07:34.147525 1644628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-1454467/.minikube/bin
	I0914 01:07:34.148103 1644628 out.go:352] Setting JSON to false
	I0914 01:07:34.149223 1644628 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":31802,"bootTime":1726244253,"procs":227,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 01:07:34.149478 1644628 start.go:139] virtualization:  
	I0914 01:07:34.152969 1644628 out.go:177] * [false-361936] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 01:07:34.155685 1644628 out.go:177]   - MINIKUBE_LOCATION=19640
	I0914 01:07:34.155754 1644628 notify.go:220] Checking for updates...
	I0914 01:07:34.161107 1644628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 01:07:34.163914 1644628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-1454467/kubeconfig
	I0914 01:07:34.166309 1644628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-1454467/.minikube
	I0914 01:07:34.168623 1644628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 01:07:34.170957 1644628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 01:07:34.173846 1644628 config.go:182] Loaded profile config "running-upgrade-980949": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0914 01:07:34.173937 1644628 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 01:07:34.216120 1644628 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 01:07:34.216287 1644628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 01:07:34.325178 1644628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-14 01:07:34.314944521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 01:07:34.325289 1644628 docker.go:318] overlay module found
	I0914 01:07:34.328173 1644628 out.go:177] * Using the docker driver based on user configuration
	I0914 01:07:34.330569 1644628 start.go:297] selected driver: docker
	I0914 01:07:34.330592 1644628 start.go:901] validating driver "docker" against <nil>
	I0914 01:07:34.330606 1644628 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 01:07:34.333549 1644628 out.go:201] 
	W0914 01:07:34.336471 1644628 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0914 01:07:34.341392 1644628 out.go:201] 
** /stderr **
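
The MK_USAGE exit above is the guard this test exists to exercise: with --container-runtime=containerd, minikube requires some CNI, so --cni=false is rejected before any cluster is created. For contrast, a start invocation that would satisfy the guard (a sketch only; "bridge" is an illustrative choice, not something this test runs):

  # containerd needs a CNI, so name one explicitly instead of disabling it
  out/minikube-linux-arm64 start -p false-361936 --memory=2048 \
    --cni=bridge --driver=docker --container-runtime=containerd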
net_test.go:88: 
----------------------- debugLogs start: false-361936 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-361936

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-361936

>>> host: /etc/nsswitch.conf:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /etc/hosts:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /etc/resolv.conf:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-361936

>>> host: crictl pods:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: crictl containers:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> k8s: describe netcat deployment:
error: context "false-361936" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-361936" does not exist

>>> k8s: netcat logs:
error: context "false-361936" does not exist

>>> k8s: describe coredns deployment:
error: context "false-361936" does not exist

>>> k8s: describe coredns pods:
error: context "false-361936" does not exist

>>> k8s: coredns logs:
error: context "false-361936" does not exist

>>> k8s: describe api server pod(s):
error: context "false-361936" does not exist

>>> k8s: api server logs:
error: context "false-361936" does not exist

>>> host: /etc/cni:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: ip a s:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: ip r s:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: iptables-save:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: iptables table nat:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> k8s: describe kube-proxy daemon set:
error: context "false-361936" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-361936" does not exist

>>> k8s: kube-proxy logs:
error: context "false-361936" does not exist

>>> host: kubelet daemon status:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: kubelet daemon config:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> k8s: kubelet logs:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:07:36 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-980949
contexts:
- context:
    cluster: running-upgrade-980949
    user: running-upgrade-980949
  name: running-upgrade-980949
current-context: running-upgrade-980949
kind: Config
preferences: {}
users:
- name: running-upgrade-980949
  user:
    client-certificate: /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/running-upgrade-980949/client.crt
    client-key: /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/running-upgrade-980949/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-361936

>>> host: docker daemon status:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: docker daemon config:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /etc/docker/daemon.json:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: docker system info:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: cri-docker daemon status:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: cri-docker daemon config:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: cri-dockerd version:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: containerd daemon status:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: containerd daemon config:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /etc/containerd/config.toml:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: containerd config dump:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: crio daemon status:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: crio daemon config:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: /etc/crio:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

>>> host: crio config:
* Profile "false-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-361936"

----------------------- debugLogs end: false-361936 [took: 4.351182588s] --------------------------------
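
Every debugLogs probe above fails identically because the false-361936 cluster was never provisioned (start exited with MK_USAGE), so the kubeconfig only knows the unrelated running-upgrade-980949 context, as the "k8s: kubectl config" dump shows. The same failure mode can be reproduced with stock kubectl:

  # list the contexts the kubeconfig actually contains
  kubectl config get-contexts
  # any command pinned to the missing context then fails the same way
  kubectl --context false-361936 get nodes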
helpers_test.go:175: Cleaning up "false-361936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-361936
--- PASS: TestNetworkPlugins/group/false (4.81s)

TestStartStop/group/old-k8s-version/serial/FirstStart (154.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-610182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0914 01:09:50.291641 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:10:29.729247 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-610182 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m34.256539753s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.26s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-610182 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5438fc1b-a149-4a06-8b46-c41ef6f716eb] Pending
helpers_test.go:344: "busybox" [5438fc1b-a149-4a06-8b46-c41ef6f716eb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5438fc1b-a149-4a06-8b46-c41ef6f716eb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002998718s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-610182 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.62s)
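
For reference, the DeployApp step boils down to creating one labeled busybox pod and probing it with a shell command. A rough equivalent, with an assumed stand-in for testdata/busybox.yaml (the real manifest is not shown in this log):

  # hypothetical approximation of testdata/busybox.yaml
  kubectl --context old-k8s-version-610182 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: busybox
    labels:
      integration-test: busybox
  spec:
    containers:
    - name: busybox
      image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
      command: ["sleep", "3600"]
  EOF
  # the test then checks the pod's open-file limit
  kubectl --context old-k8s-version-610182 exec busybox -- /bin/sh -c "ulimit -n"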

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-610182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-610182 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.471672448s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-610182 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.68s)
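
The --images/--registries pair rewires the addon's image source: MetricsServer is pointed at registry.k8s.io/echoserver:1.4 behind the deliberately unresolvable registry fake.domain, since this test only verifies the override is wired through, not that metrics-server runs. One way to inspect the result (a sketch; assumes the stock deployment name in kube-system):

  kubectl --context old-k8s-version-610182 -n kube-system get deploy metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'
  # expected to show the fake.domain registry prefixed onto the override image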

TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-610182 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-610182 --alsologtostderr -v=3: (12.133284365s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610182 -n old-k8s-version-610182
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610182 -n old-k8s-version-610182: exit status 7 (92.632435ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-610182 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
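
The "(may be ok)" note exists because minikube status encodes cluster state in its exit code, so a cleanly stopped profile returns non-zero without anything being wrong. A sketch of reading both channels in a script without tripping set -e:

  rc=0
  host=$(out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-610182 -n old-k8s-version-610182) || rc=$?
  echo "host=${host} exit=${rc}"   # here: host=Stopped exit=7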

TestStartStop/group/no-preload/serial/FirstStart (80.07s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-772888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 01:12:53.357568 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-772888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m20.066498313s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.07s)

TestStartStop/group/no-preload/serial/DeployApp (8.37s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-772888 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0165d334-c1ec-4fc0-8041-7c9af64ee013] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0165d334-c1ec-4fc0-8041-7c9af64ee013] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005083374s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-772888 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.34s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-772888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-772888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.220900385s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-772888 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.34s)

TestStartStop/group/no-preload/serial/Stop (12.06s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-772888 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-772888 --alsologtostderr -v=3: (12.0624132s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-772888 -n no-preload-772888
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-772888 -n no-preload-772888: exit status 7 (77.079814ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-772888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (281.37s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-772888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 01:14:50.291264 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:15:29.729226 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-772888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m41.03337652s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-772888 -n no-preload-772888
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (281.37s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-j9r9r" [7206697d-5b8f-46ca-b9a1-23428032b4aa] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004905814s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-j9r9r" [7206697d-5b8f-46ca-b9a1-23428032b4aa] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003918572s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-610182 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c4snv" [40de3527-2836-4e3e-9e89-0b08fa27d046] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004644589s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-610182 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
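
VerifyKubernetesImages walks the profile's image list and reports anything outside the set it expects for this Kubernetes version; the kindnetd and busybox entries above come from the test environment rather than the control plane. A loose shell equivalent (the grep allowlist is only illustrative; the real expected set lives in start_stop_delete_test.go):

  out/minikube-linux-arm64 -p old-k8s-version-610182 image list \
    | grep -Ev '^registry.k8s.io/|^k8s.gcr.io/'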

TestStartStop/group/old-k8s-version/serial/Pause (2.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-610182 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610182 -n old-k8s-version-610182
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610182 -n old-k8s-version-610182: exit status 2 (321.714033ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-610182 -n old-k8s-version-610182
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-610182 -n old-k8s-version-610182: exit status 2 (319.911758ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-610182 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610182 -n old-k8s-version-610182
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-610182 -n old-k8s-version-610182
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.98s)
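
Condensed, the Pause test drives this cycle and asserts the transitions the stdout blocks above show (Paused/Stopped with exit status 2 while paused):

  out/minikube-linux-arm64 pause -p old-k8s-version-610182 --alsologtostderr -v=1
  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610182   # Paused, exit 2
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-610182     # Stopped, exit 2
  out/minikube-linux-arm64 unpause -p old-k8s-version-610182 --alsologtostderr -v=1
  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-610182   # expected to report Running again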

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-c4snv" [40de3527-2836-4e3e-9e89-0b08fa27d046] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003950756s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-772888 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/embed-certs/serial/FirstStart (93.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-777354 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-777354 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m33.138133369s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.14s)
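
The --embed-certs flag inlines the cluster's certificates into the kubeconfig (certificate-authority-data and friends) instead of referencing .crt/.key paths under .minikube, which is the layout the earlier "k8s: kubectl config" dump used. One hedged way to confirm after such a start:

  # non-empty output means the CA cert is embedded rather than referenced by path
  kubectl config view --raw \
    -o jsonpath='{.clusters[?(@.name=="embed-certs-777354")].cluster.certificate-authority-data}' | head -c 40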

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-772888 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (4.56s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-772888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-772888 --alsologtostderr -v=1: (1.1534265s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-772888 -n no-preload-772888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-772888 -n no-preload-772888: exit status 2 (335.58575ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-772888 -n no-preload-772888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-772888 -n no-preload-772888: exit status 2 (337.638954ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-772888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-772888 -n no-preload-772888
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-772888 -n no-preload-772888
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.56s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (96.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-531061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 01:19:50.291243 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-531061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m36.329815539s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (96.33s)

TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-777354 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e5f4f60c-825b-4819-a7c8-9a582925f54f] Pending
helpers_test.go:344: "busybox" [e5f4f60c-825b-4819-a7c8-9a582925f54f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e5f4f60c-825b-4819-a7c8-9a582925f54f] Running
E0914 01:20:29.729693 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003801952s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-777354 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.45s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-777354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-777354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.112667521s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-777354 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.23s)

TestStartStop/group/embed-certs/serial/Stop (12.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-777354 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-777354 --alsologtostderr -v=3: (12.083831657s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.08s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-531061 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [08db07b4-aacf-4d7f-b065-338fe0a3834b] Pending
helpers_test.go:344: "busybox" [08db07b4-aacf-4d7f-b065-338fe0a3834b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [08db07b4-aacf-4d7f-b065-338fe0a3834b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004283582s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-531061 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.34s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-777354 -n embed-certs-777354
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-777354 -n embed-certs-777354: exit status 7 (89.667676ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-777354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (266.94s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-777354 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-777354 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.608772943s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-777354 -n embed-certs-777354
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.94s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-531061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-531061 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-531061 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-531061 --alsologtostderr -v=3: (12.28028762s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061: exit status 7 (87.364257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-531061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-531061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 01:21:50.594581 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:50.600964 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:50.612338 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:50.633705 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:50.675233 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:50.756778 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:50.918423 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:51.239946 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:51.881545 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:53.163757 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:21:55.725948 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:22:00.847575 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:22:11.089435 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:22:31.571817 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:12.533415 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:36.808534 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:36.814967 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:36.826330 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:36.847780 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:36.889116 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:36.970507 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:37.131968 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:37.453773 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:38.095115 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:39.376570 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:41.938530 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:47.060209 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:23:57.301578 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:24:17.782831 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:24:34.455314 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:24:50.291760 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:24:58.744598 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-531061 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (5m2.658180642s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (303.08s)
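
The E0914 cert_rotation lines interleaved above appear to come from a client-go certificate-reload watcher inside the long-running test process, which still references client certificates of profiles presumably torn down earlier in the run (old-k8s-version-610182, no-preload-772888, functional-089303); the repeated open() failures are noise and do not affect this test's result. A plausible spot check, assuming shell access to the build workspace:

    # The watched path went away with the old profile, hence the errors:
    ls -l /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt
    # expected: ls: cannot access '...': No such file or directory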

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lkmzc" [6232878c-3c89-4cee-8eef-393c258c094c] Running
E0914 01:25:12.801640 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003330544s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lkmzc" [6232878c-3c89-4cee-8eef-393c258c094c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004366213s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-777354 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-777354 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-777354 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-777354 -n embed-certs-777354
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-777354 -n embed-certs-777354: exit status 2 (323.966172ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-777354 -n embed-certs-777354
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-777354 -n embed-certs-777354: exit status 2 (327.787452ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-777354 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-777354 -n embed-certs-777354
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-777354 -n embed-certs-777354
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.27s)
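
In the Pause subtest, minikube pause suspends the cluster's containers while minikube status keeps answering: {{.APIServer}} reports "Paused" and {{.Kubelet}} reports "Stopped", each via exit status 2, which the harness again treats as acceptable. The round trip as run here:

    out/minikube-linux-arm64 pause -p embed-certs-777354 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-777354 -n embed-certs-777354   # "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-777354 -n embed-certs-777354     # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p embed-certs-777354 --alsologtostderr -v=1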

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.15s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-702048 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 01:25:29.729444 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/addons-131319/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-702048 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (37.149742417s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.15s)
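
The newest-cni profile exercises a bring-your-own-CNI start. Reading the invocation above (flag semantics are the usual minikube ones, not something the log itself asserts): --wait=apiserver,system_pods,default_sa limits the readiness wait to those components, --feature-gates is forwarded to the Kubernetes components, --network-plugin=cni installs no CNI so workload pods cannot schedule yet (the source of the repeated "cni mode requires additional setup" warnings in this group), and --extra-config=kubeadm.pod-network-cidr seeds kubeadm's pod CIDR:

    out/minikube-linux-arm64 start -p newest-cni-702048 --memory=2200 \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.1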

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cn6dw" [63ec67c3-4a98-48ac-8727-b9598e09cfa8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004245068s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-702048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-702048 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.071320882s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.07s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-702048 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-702048 --alsologtostderr -v=3: (1.261467568s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702048 -n newest-cni-702048
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702048 -n newest-cni-702048: exit status 7 (71.348721ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-702048 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (22.7s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-702048 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-702048 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (21.935323782s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-702048 -n newest-cni-702048
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-cn6dw" [63ec67c3-4a98-48ac-8727-b9598e09cfa8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003801592s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-531061 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-531061 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-531061 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-531061 --alsologtostderr -v=1: (1.048372352s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061: exit status 2 (410.29737ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061: exit status 2 (408.702856ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-531061 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-531061 --alsologtostderr -v=1: (1.30005175s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-531061 -n default-k8s-diff-port-531061
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.45s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (98.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m38.61695218s)
--- PASS: TestNetworkPlugins/group/auto/Start (98.62s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-702048 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-702048 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702048 -n newest-cni-702048
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702048 -n newest-cni-702048: exit status 2 (408.475442ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-702048 -n newest-cni-702048
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-702048 -n newest-cni-702048: exit status 2 (430.920464ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-702048 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-702048 -n newest-cni-702048
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-702048 -n newest-cni-702048
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.62s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (86.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0914 01:26:50.593651 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:27:18.297833 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m26.242263126s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (86.24s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-361936 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-361936 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7rb6w" [25d62bc8-ce14-48c8-aae1-2af96bbcb018] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7rb6w" [25d62bc8-ce14-48c8-aae1-2af96bbcb018] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004359062s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-67mfl" [ef68ae0a-cf75-4202-a662-a575709e47a2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004500907s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-361936 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (8.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-361936 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6579s" [72701294-2349-459f-8d43-dfc5001cebb5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6579s" [72701294-2349-459f-8d43-dfc5001cebb5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.005952037s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-361936 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
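
Each NetworkPlugins group runs the same three probes against the netcat deployment: DNS (nslookup of kubernetes.default through cluster DNS), Localhost (loopback inside the pod), and HairPin (the pod dialing its own service name, so traffic goes out via the service VIP and must loop back to the same pod). Runnable by hand against this context (the "netcat" service name is inferred from the command; the manifest is testdata/netcat-deployment.yaml):

    kubectl --context auto-361936 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: "netcat" is the pod's own service
    kubectl --context auto-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"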

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-361936 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (72.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0914 01:28:36.807897 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m12.346089338s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0914 01:29:04.508461 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/no-preload-772888/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:29:33.359638 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m0.108187441s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.11s)
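
Unlike the kindnet and calico groups, which pick a built-in CNI by name, the custom-flannel group hands --cni a manifest path (testdata/kube-flannel.yaml) that minikube applies as the cluster's CNI; that --cni accepts a path at all is inferred from this run succeeding, not from documentation. The two forms side by side:

    # built-in plugin by name:
    out/minikube-linux-arm64 start -p kindnet-361936 --memory=3072 --cni=kindnet --driver=docker  --container-runtime=containerd
    # arbitrary CNI manifest by path:
    out/minikube-linux-arm64 start -p custom-flannel-361936 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd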

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-361936 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-361936 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8n66n" [8086e577-e3fd-41c1-8a30-82fca9b19fb8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8n66n" [8086e577-e3fd-41c1-8a30-82fca9b19fb8] Running
E0914 01:29:50.292012 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/functional-089303/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.013046025s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.33s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5x8nf" [e4fd0eec-5ffe-4557-962d-398cd985374b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004842634s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-361936 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-361936 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mpgc2" [1cd8fd59-b09d-4e20-a56d-1c7bdffc554e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mpgc2" [1cd8fd59-b09d-4e20-a56d-1c7bdffc554e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004859652s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-361936 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-361936 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (53.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (53.316028563s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.32s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (56.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0914 01:30:37.159210 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:37.167635 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:37.182144 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:37.203971 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:37.247772 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:37.330350 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:37.491961 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:37.813940 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:38.456206 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:39.738000 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:42.300199 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:47.421621 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:30:57.662941 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.885966905s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.89s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-361936 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-361936 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7qsth" [5331f012-ac5d-464f-be5c-ee194545e6cf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 01:31:18.145039 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-7qsth" [5331f012-ac5d-464f-be5c-ee194545e6cf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004032144s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-361936 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kczdg" [5457bf1a-36a1-4209-a399-2db398dd081c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.010349244s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
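
The ControllerPod gate waits for the flannel DaemonSet pod (label app=flannel in the kube-flannel namespace, per the lines above) before any connectivity checks run. A sketch of the equivalent manual wait:

  kubectl --context flannel-361936 -n kube-flannel wait pod -l app=flannel --for=condition=Ready --timeout=10m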

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-361936 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-361936 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cvlzr" [4f72811a-9082-4e82-b2d4-1a19e59b20e9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cvlzr" [4f72811a-9082-4e82-b2d4-1a19e59b20e9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004416615s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.38s)

TestNetworkPlugins/group/flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-361936 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.24s)

TestNetworkPlugins/group/flannel/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.29s)

TestNetworkPlugins/group/flannel/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

TestNetworkPlugins/group/bridge/Start (74.63s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0914 01:31:50.594417 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/old-k8s-version-610182/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:31:59.106651 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/default-k8s-diff-port-531061/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-361936 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m14.628770838s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.63s)
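
The start line above pins this run to the bridge CNI on the docker driver with the containerd runtime. A sketch of reproducing it locally with the same flags (the profile name is the test's; any name works):

  minikube start -p bridge-361936 --memory=3072 --alsologtostderr \
    --wait=true --wait-timeout=15m --cni=bridge \
    --driver=docker --container-runtime=containerd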

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-361936 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-361936 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sqvsk" [993ba3ae-f851-4b26-bd89-de9871e9a367] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 01:33:01.807757 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:01.814104 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:01.825486 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:01.847012 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:01.888383 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:01.969765 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:02.131339 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:02.453284 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:03.095361 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:03.940219 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:03.946603 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:03.958240 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:03.979609 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:04.021041 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:04.102787 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:04.264480 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:04.376962 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:04.585897 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:05.227253 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-sqvsk" [993ba3ae-f851-4b26-bd89-de9871e9a367] Running
E0914 01:33:06.508837 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:06.939247 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/auto-361936/client.crt: no such file or directory" logger="UnhandledError"
E0914 01:33:09.070868 1459848 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/kindnet-361936/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003749608s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-361936 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-361936 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-399396 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-399396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-399396
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-524130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-524130
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

TestNetworkPlugins/group/kubenet (5.55s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-361936 [pass: true] --------------------------------
Every probe below failed the same way: the "kubenet-361936" profile and kubectl context were never created, so each command reports a missing context or profile.
>>> netcat probes (nslookup/dig for kubernetes.default, nc/dig against 10.96.0.10 udp/53 and tcp/53, /etc/nsswitch.conf, /etc/hosts, /etc/resolv.conf):
Error in configuration: context was not found for specified context: kubenet-361936
>>> k8s probes (nodes/services/endpoints/daemon sets/deployments/pods listing, cms, and describe/logs for the netcat deployment, coredns, the api server and kube-proxy):
Error in configuration: context was not found for specified context: kubenet-361936
error: context "kubenet-361936" does not exist
>>> host probes (/etc/nsswitch.conf, /etc/hosts, /etc/resolv.conf, crictl pods/containers, /etc/cni, ip a s, ip r s, iptables-save, iptables table nat, and the kubelet/docker/cri-docker/containerd/crio daemon status, configs and unit files):
* Profile "kubenet-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-361936"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
----------------------- debugLogs end: kubenet-361936 [took: 5.325661159s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-361936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-361936
--- SKIP: TestNetworkPlugins/group/kubenet (5.55s)

TestNetworkPlugins/group/cilium (6.58s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-361936 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-361936" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-361936" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-361936" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-361936" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-361936" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-361936" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-361936" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-361936" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-361936

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-361936" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-361936" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-361936

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-361936

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-361936" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-361936" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-361936" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-361936" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-361936" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: kubelet daemon config:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> k8s: kubelet logs:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19640-1454467/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:07:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: running-upgrade-980949
contexts:
- context:
    cluster: running-upgrade-980949
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 01:07:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: running-upgrade-980949
  name: running-upgrade-980949
current-context: running-upgrade-980949
kind: Config
preferences: {}
users:
- name: running-upgrade-980949
  user:
    client-certificate: /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/running-upgrade-980949/client.crt
    client-key: /home/jenkins/minikube-integration/19640-1454467/.minikube/profiles/running-upgrade-980949/client.key
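
The kubeconfig above is the relevant clue for this debugLogs block: its only
context is running-upgrade-980949, so every command scoped to the
cilium-361936 context fails with the errors shown. A quick manual check with
plain kubectl would confirm this (standard kubectl subcommands, not part of
the test harness; the expected output is inferred from the kubeconfig above):

    kubectl config current-context            # should print: running-upgrade-980949
    kubectl config get-contexts cilium-361936 # should fail: context not found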
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-361936

>>> host: docker daemon status:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: docker daemon config:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: docker system info:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: cri-docker daemon status:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: cri-docker daemon config:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: cri-dockerd version:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: containerd daemon status:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: containerd daemon config:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: containerd config dump:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: crio daemon status:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: crio daemon config:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: /etc/crio:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

>>> host: crio config:
* Profile "cilium-361936" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-361936"

----------------------- debugLogs end: cilium-361936 [took: 6.338813025s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-361936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-361936
--- SKIP: TestNetworkPlugins/group/cilium (6.58s)