Test Report: Docker_Linux_containerd_arm64 18424

1ff1985e433cf64121c1d5b23135320107f58df6:2024-10-07:36542

Failed tests (2/328)

Order  Failed test                                              Duration (s)
29     TestAddons/serial/Volcano                                211.24
302    TestStartStop/group/old-k8s-version/serial/SecondStart   374.37
TestAddons/serial/Volcano (211.24s)
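In summary: all three Volcano components became healthy within seconds, but the test vcjob's pod (test-job-nginx-0) stayed Pending for the full 3m0s window because the scheduler could not satisfy its CPU request ("0/1 nodes are unavailable: 1 Insufficient cpu."). To re-run only this test from a minikube source tree, a sketch (assumption: the CI harness also passes driver/runtime flags that are not shown here):

	go test ./test/integration -run 'TestAddons/serial/Volcano' -timeout 30m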

=== RUN   TestAddons/serial/Volcano
addons_test.go:811: volcano-admission stabilized in 49.940381ms
addons_test.go:803: volcano-scheduler stabilized in 50.499379ms
addons_test.go:819: volcano-controller stabilized in 50.554418ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-f5pr5" [7afa2b0e-d001-4e10-b72c-cf6e7516f5e0] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003745444s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-58pv9" [cdf0cd39-f452-470a-a4d9-fd265de4917f] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004184274s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-k4lc4" [a21f40db-34ef-4b72-929b-4977f65b8c8e] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00328516s
addons_test.go:838: (dbg) Run:  kubectl --context addons-956205 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-956205 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-956205 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [900e759b-79cf-48f9-b955-3e5e06267a3d] Pending
helpers_test.go:344: "test-job-nginx-0" [900e759b-79cf-48f9-b955-3e5e06267a3d] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:870: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-956205 -n addons-956205
addons_test.go:870: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-07 13:02:50.381546114 +0000 UTC m=+434.245939935
addons_test.go:870: (dbg) Run:  kubectl --context addons-956205 describe po test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-956205 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-ea3c34f9-067e-425d-be6f-fe3a1a1ddf17
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fj5gf (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-fj5gf:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:870: (dbg) Run:  kubectl --context addons-956205 logs test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-956205 logs test-job-nginx-0 -n my-volcano:
addons_test.go:871: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
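The describe output above points at CPU capacity rather than at Volcano itself: the nginx task requests and limits a full CPU (cpu: 1), and the single node has no whole CPU left once the addon pods' requests are counted. One way to confirm this against a still-running cluster (a sketch; the sed range simply prints kubectl's "Allocated resources" section of the node description):

	kubectl --context addons-956205 describe node addons-956205 | sed -n '/Allocated resources:/,/Events:/p'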
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-956205
helpers_test.go:235: (dbg) docker inspect addons-956205:

-- stdout --
	[
	    {
	        "Id": "77ebf52ee0f9a411d4396b93482b36631dc7c619ce40f172a636528e99025131",
	        "Created": "2024-10-07T12:56:18.226747118Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 581399,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T12:56:18.365652673Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/77ebf52ee0f9a411d4396b93482b36631dc7c619ce40f172a636528e99025131/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/77ebf52ee0f9a411d4396b93482b36631dc7c619ce40f172a636528e99025131/hostname",
	        "HostsPath": "/var/lib/docker/containers/77ebf52ee0f9a411d4396b93482b36631dc7c619ce40f172a636528e99025131/hosts",
	        "LogPath": "/var/lib/docker/containers/77ebf52ee0f9a411d4396b93482b36631dc7c619ce40f172a636528e99025131/77ebf52ee0f9a411d4396b93482b36631dc7c619ce40f172a636528e99025131-json.log",
	        "Name": "/addons-956205",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-956205:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-956205",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/3096afad72f06555367c792ffdbff68493980ecad03f712acc212642d46e49d9-init/diff:/var/lib/docker/overlay2/e63a2c5503af6c1a5c1dd965c5cc29d76da2a1b8721a0b9206304ab209f33143/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3096afad72f06555367c792ffdbff68493980ecad03f712acc212642d46e49d9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3096afad72f06555367c792ffdbff68493980ecad03f712acc212642d46e49d9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3096afad72f06555367c792ffdbff68493980ecad03f712acc212642d46e49d9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-956205",
	                "Source": "/var/lib/docker/volumes/addons-956205/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-956205",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-956205",
	                "name.minikube.sigs.k8s.io": "addons-956205",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2404031fe90e49c18cfbd51066b36d86126d91e25399896fee0801ef832220c3",
	            "SandboxKey": "/var/run/docker/netns/2404031fe90e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33508"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-956205": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "63bd0c44f7b9dbbc0b6d9a54387a97791ccc3ac4f5413cf733fbbdc118459fe9",
	                    "EndpointID": "edc273eecd9e7641d834a4f6bb90c601334b69e9ed4c060a53a869a2a85ce3cf",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-956205",
	                        "77ebf52ee0f9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
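The inspect output shows why the CPU budget is so tight: the node container was created with "NanoCpus": 2000000000 (2 CPUs) and "Memory": 4194304000 (~4 GB), matching the --memory=4000 start flag and a 2-CPU default (the cluster config later in the log also records CPUs:2). To extract just those two fields instead of the full JSON, a sketch using docker inspect's Go-template formatting:

	docker inspect addons-956205 --format '{{.HostConfig.NanoCpus}} nanoCPUs, {{.HostConfig.Memory}} bytes'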
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-956205 -n addons-956205
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 logs -n 25: (1.622306963s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-567694   | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |                     |
	|         | -p download-only-567694              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| delete  | -p download-only-567694              | download-only-567694   | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| start   | -o=json --download-only              | download-only-985583   | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |                     |
	|         | -p download-only-985583              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| delete  | -p download-only-985583              | download-only-985583   | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| delete  | -p download-only-567694              | download-only-567694   | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| delete  | -p download-only-985583              | download-only-985583   | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| start   | --download-only -p                   | download-docker-349416 | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |                     |
	|         | download-docker-349416               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-349416            | download-docker-349416 | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| start   | --download-only -p                   | binary-mirror-013254   | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |                     |
	|         | binary-mirror-013254                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36647               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-013254              | binary-mirror-013254   | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| addons  | enable dashboard -p                  | addons-956205          | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |                     |
	|         | addons-956205                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-956205          | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |                     |
	|         | addons-956205                        |                        |         |         |                     |                     |
	| start   | -p addons-956205 --wait=true         | addons-956205          | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:59 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:55:52
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:55:52.324074  580915 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:55:52.324222  580915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:55:52.324235  580915 out.go:358] Setting ErrFile to fd 2...
	I1007 12:55:52.324241  580915 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:55:52.324512  580915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 12:55:52.324961  580915 out.go:352] Setting JSON to false
	I1007 12:55:52.325860  580915 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9501,"bootTime":1728296251,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1007 12:55:52.325933  580915 start.go:139] virtualization:  
	I1007 12:55:52.328371  580915 out.go:177] * [addons-956205] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:55:52.330751  580915 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 12:55:52.330872  580915 notify.go:220] Checking for updates...
	I1007 12:55:52.334440  580915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:55:52.336252  580915 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 12:55:52.337953  580915 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	I1007 12:55:52.339829  580915 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 12:55:52.341754  580915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 12:55:52.343975  580915 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:55:52.371247  580915 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:55:52.371371  580915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:55:52.431535  580915 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-07 12:55:52.419362711 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:55:52.431659  580915 docker.go:318] overlay module found
	I1007 12:55:52.433858  580915 out.go:177] * Using the docker driver based on user configuration
	I1007 12:55:52.435997  580915 start.go:297] selected driver: docker
	I1007 12:55:52.436022  580915 start.go:901] validating driver "docker" against <nil>
	I1007 12:55:52.436037  580915 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 12:55:52.436667  580915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:55:52.486459  580915 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-07 12:55:52.477462986 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:55:52.486677  580915 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:55:52.486900  580915 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 12:55:52.488842  580915 out.go:177] * Using Docker driver with root privileges
	I1007 12:55:52.490733  580915 cni.go:84] Creating CNI manager for ""
	I1007 12:55:52.490791  580915 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:55:52.490811  580915 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:55:52.490903  580915 start.go:340] cluster config:
	{Name:addons-956205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-956205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:55:52.493222  580915 out.go:177] * Starting "addons-956205" primary control-plane node in "addons-956205" cluster
	I1007 12:55:52.494987  580915 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 12:55:52.497033  580915 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 12:55:52.498743  580915 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 12:55:52.498801  580915 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1007 12:55:52.498814  580915 cache.go:56] Caching tarball of preloaded images
	I1007 12:55:52.498832  580915 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 12:55:52.498898  580915 preload.go:172] Found /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 12:55:52.498907  580915 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1007 12:55:52.499253  580915 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/config.json ...
	I1007 12:55:52.499274  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/config.json: {Name:mk44bb64266d2a7477f52973369cc95308b81db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:55:52.514828  580915 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 12:55:52.514952  580915 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 12:55:52.514977  580915 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1007 12:55:52.514987  580915 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1007 12:55:52.514995  580915 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1007 12:55:52.515004  580915 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1007 12:56:10.203895  580915 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1007 12:56:10.203936  580915 cache.go:194] Successfully downloaded all kic artifacts
	I1007 12:56:10.203969  580915 start.go:360] acquireMachinesLock for addons-956205: {Name:mkef8a260a87a7a310d3de4815db917d957ba6e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 12:56:10.204101  580915 start.go:364] duration metric: took 105.976µs to acquireMachinesLock for "addons-956205"
	I1007 12:56:10.204137  580915 start.go:93] Provisioning new machine with config: &{Name:addons-956205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-956205 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1007 12:56:10.204247  580915 start.go:125] createHost starting for "" (driver="docker")
	I1007 12:56:10.206838  580915 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1007 12:56:10.207157  580915 start.go:159] libmachine.API.Create for "addons-956205" (driver="docker")
	I1007 12:56:10.207209  580915 client.go:168] LocalClient.Create starting
	I1007 12:56:10.207349  580915 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem
	I1007 12:56:10.777247  580915 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem
	I1007 12:56:11.876863  580915 cli_runner.go:164] Run: docker network inspect addons-956205 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1007 12:56:11.892492  580915 cli_runner.go:211] docker network inspect addons-956205 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1007 12:56:11.892581  580915 network_create.go:284] running [docker network inspect addons-956205] to gather additional debugging logs...
	I1007 12:56:11.892605  580915 cli_runner.go:164] Run: docker network inspect addons-956205
	W1007 12:56:11.908420  580915 cli_runner.go:211] docker network inspect addons-956205 returned with exit code 1
	I1007 12:56:11.908451  580915 network_create.go:287] error running [docker network inspect addons-956205]: docker network inspect addons-956205: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-956205 not found
	I1007 12:56:11.908465  580915 network_create.go:289] output of [docker network inspect addons-956205]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-956205 not found
	
	** /stderr **
	I1007 12:56:11.908565  580915 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 12:56:11.928294  580915 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001911ed0}
	I1007 12:56:11.928342  580915 network_create.go:124] attempt to create docker network addons-956205 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1007 12:56:11.928400  580915 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-956205 addons-956205
	I1007 12:56:12.018514  580915 network_create.go:108] docker network addons-956205 192.168.49.0/24 created
	I1007 12:56:12.018578  580915 kic.go:121] calculated static IP "192.168.49.2" for the "addons-956205" container
	I1007 12:56:12.018717  580915 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1007 12:56:12.033271  580915 cli_runner.go:164] Run: docker volume create addons-956205 --label name.minikube.sigs.k8s.io=addons-956205 --label created_by.minikube.sigs.k8s.io=true
	I1007 12:56:12.055870  580915 oci.go:103] Successfully created a docker volume addons-956205
	I1007 12:56:12.055976  580915 cli_runner.go:164] Run: docker run --rm --name addons-956205-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-956205 --entrypoint /usr/bin/test -v addons-956205:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1007 12:56:14.076665  580915 cli_runner.go:217] Completed: docker run --rm --name addons-956205-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-956205 --entrypoint /usr/bin/test -v addons-956205:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (2.020642252s)
	I1007 12:56:14.076699  580915 oci.go:107] Successfully prepared a docker volume addons-956205
	I1007 12:56:14.076723  580915 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 12:56:14.076743  580915 kic.go:194] Starting extracting preloaded images to volume ...
	I1007 12:56:14.076816  580915 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-956205:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1007 12:56:18.149288  580915 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-956205:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.072431997s)
	I1007 12:56:18.149322  580915 kic.go:203] duration metric: took 4.072576397s to extract preloaded images to volume ...
	W1007 12:56:18.149494  580915 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1007 12:56:18.149622  580915 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1007 12:56:18.212964  580915 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-956205 --name addons-956205 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-956205 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-956205 --network addons-956205 --ip 192.168.49.2 --volume addons-956205:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1007 12:56:18.532513  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Running}}
	I1007 12:56:18.558723  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:18.584123  580915 cli_runner.go:164] Run: docker exec addons-956205 stat /var/lib/dpkg/alternatives/iptables
	I1007 12:56:18.649960  580915 oci.go:144] the created container "addons-956205" has a running status.
	I1007 12:56:18.649995  580915 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa...
	I1007 12:56:18.900980  580915 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1007 12:56:18.927898  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:18.952963  580915 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1007 12:56:18.952986  580915 kic_runner.go:114] Args: [docker exec --privileged addons-956205 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1007 12:56:19.029092  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:19.061943  580915 machine.go:93] provisionDockerMachine start ...
	I1007 12:56:19.062033  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:19.085711  580915 main.go:141] libmachine: Using SSH client type: native
	I1007 12:56:19.085993  580915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1007 12:56:19.086005  580915 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 12:56:19.086654  580915 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59926->127.0.0.1:33504: read: connection reset by peer
	I1007 12:56:22.225224  580915 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-956205
	
	I1007 12:56:22.225251  580915 ubuntu.go:169] provisioning hostname "addons-956205"
	I1007 12:56:22.225323  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:22.242609  580915 main.go:141] libmachine: Using SSH client type: native
	I1007 12:56:22.242860  580915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1007 12:56:22.242877  580915 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-956205 && echo "addons-956205" | sudo tee /etc/hostname
	I1007 12:56:22.389881  580915 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-956205
	
	I1007 12:56:22.389965  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:22.408196  580915 main.go:141] libmachine: Using SSH client type: native
	I1007 12:56:22.408450  580915 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33504 <nil> <nil>}
	I1007 12:56:22.408473  580915 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-956205' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-956205/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-956205' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 12:56:22.541830  580915 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 12:56:22.541857  580915 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18424-574640/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-574640/.minikube}
	I1007 12:56:22.541890  580915 ubuntu.go:177] setting up certificates
	I1007 12:56:22.541900  580915 provision.go:84] configureAuth start
	I1007 12:56:22.541968  580915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-956205
	I1007 12:56:22.558617  580915 provision.go:143] copyHostCerts
	I1007 12:56:22.558706  580915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/ca.pem (1082 bytes)
	I1007 12:56:22.558845  580915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/cert.pem (1123 bytes)
	I1007 12:56:22.558913  580915 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/key.pem (1679 bytes)
	I1007 12:56:22.558964  580915 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem org=jenkins.addons-956205 san=[127.0.0.1 192.168.49.2 addons-956205 localhost minikube]
	I1007 12:56:23.284851  580915 provision.go:177] copyRemoteCerts
	I1007 12:56:23.284920  580915 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 12:56:23.284962  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:23.301576  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:23.398875  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 12:56:23.424123  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1007 12:56:23.449261  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1007 12:56:23.473811  580915 provision.go:87] duration metric: took 931.890494ms to configureAuth
	I1007 12:56:23.473838  580915 ubuntu.go:193] setting minikube options for container-runtime
	I1007 12:56:23.474032  580915 config.go:182] Loaded profile config "addons-956205": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:56:23.474049  580915 machine.go:96] duration metric: took 4.412085605s to provisionDockerMachine
	I1007 12:56:23.474056  580915 client.go:171] duration metric: took 13.266840881s to LocalClient.Create
	I1007 12:56:23.474078  580915 start.go:167] duration metric: took 13.266922037s to libmachine.API.Create "addons-956205"
	I1007 12:56:23.474100  580915 start.go:293] postStartSetup for "addons-956205" (driver="docker")
	I1007 12:56:23.474111  580915 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 12:56:23.474179  580915 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 12:56:23.474225  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:23.490706  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:23.588015  580915 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 12:56:23.591209  580915 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 12:56:23.591250  580915 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 12:56:23.591262  580915 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 12:56:23.591270  580915 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 12:56:23.591282  580915 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-574640/.minikube/addons for local assets ...
	I1007 12:56:23.591369  580915 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-574640/.minikube/files for local assets ...
	I1007 12:56:23.591396  580915 start.go:296] duration metric: took 117.288418ms for postStartSetup
	I1007 12:56:23.591718  580915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-956205
	I1007 12:56:23.608821  580915 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/config.json ...
	I1007 12:56:23.609118  580915 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 12:56:23.609173  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:23.626483  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:23.718801  580915 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 12:56:23.723484  580915 start.go:128] duration metric: took 13.519220107s to createHost
	I1007 12:56:23.723512  580915 start.go:83] releasing machines lock for "addons-956205", held for 13.519395177s
	I1007 12:56:23.723588  580915 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-956205
	I1007 12:56:23.739553  580915 ssh_runner.go:195] Run: cat /version.json
	I1007 12:56:23.739624  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:23.739893  580915 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 12:56:23.739965  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:23.762728  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:23.769080  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:23.853283  580915 ssh_runner.go:195] Run: systemctl --version
	I1007 12:56:23.986506  580915 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 12:56:23.991092  580915 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1007 12:56:24.027869  580915 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1007 12:56:24.027969  580915 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 12:56:24.062007  580915 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
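The two find one-liners above bring pre-existing CNI configs in line with the CNI 1.0.0 spec: the first injects the mandatory "name" field into the loopback config and pins cniVersion to 1.0.0, the second renames any bridge/podman configs to *.mk_disabled so they stop loading. A sketch of what a patched loopback file would look like (the filename is hypothetical):

	# Hypothetical /etc/cni/net.d/200-loopback.conf after the patch above
	cat /etc/cni/net.d/200-loopback.conf
	# {
	#   "cniVersion": "1.0.0",
	#   "name": "loopback",
	#   "type": "loopback"
	# }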
	I1007 12:56:24.062077  580915 start.go:495] detecting cgroup driver to use...
	I1007 12:56:24.062129  580915 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 12:56:24.062190  580915 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 12:56:24.076451  580915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 12:56:24.089467  580915 docker.go:217] disabling cri-docker service (if available) ...
	I1007 12:56:24.089591  580915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 12:56:24.105009  580915 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 12:56:24.121799  580915 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 12:56:24.218981  580915 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 12:56:24.318601  580915 docker.go:233] disabling docker service ...
	I1007 12:56:24.318725  580915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 12:56:24.342572  580915 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 12:56:24.354865  580915 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 12:56:24.450222  580915 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 12:56:24.544975  580915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 12:56:24.556598  580915 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 12:56:24.574166  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1007 12:56:24.584558  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 12:56:24.594544  580915 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 12:56:24.594614  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 12:56:24.604762  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 12:56:24.615000  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 12:56:24.624986  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 12:56:24.635737  580915 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 12:56:24.645719  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 12:56:24.655931  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 12:56:24.667017  580915 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 12:56:24.677019  580915 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 12:56:24.685481  580915 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 12:56:24.693840  580915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:56:24.776542  580915 ssh_runner.go:195] Run: sudo systemctl restart containerd
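The run of sed edits above rewrites /etc/containerd/config.toml in place: sandbox_image is pinned to registry.k8s.io/pause:3.10, SystemdCgroup is forced to false to match the detected cgroupfs driver, legacy runc.v1/runtime.v1.linux runtime names are migrated to io.containerd.runc.v2, and enable_unprivileged_ports is re-added under the CRI plugin. After the restart, the effective setting can be confirmed from containerd itself (a sketch):

	# Confirm the cgroup driver setting actually took effect after the restart
	sudo containerd config dump | grep -n 'SystemdCgroup'
	# Expect: SystemdCgroup = false, matching the cgroupfs kubelet config further down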
	I1007 12:56:24.903402  580915 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1007 12:56:24.903564  580915 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1007 12:56:24.907306  580915 start.go:563] Will wait 60s for crictl version
	I1007 12:56:24.907417  580915 ssh_runner.go:195] Run: which crictl
	I1007 12:56:24.910960  580915 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 12:56:24.954223  580915 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1007 12:56:24.954304  580915 ssh_runner.go:195] Run: containerd --version
	I1007 12:56:24.976837  580915 ssh_runner.go:195] Run: containerd --version
	I1007 12:56:25.015660  580915 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1007 12:56:25.017855  580915 cli_runner.go:164] Run: docker network inspect addons-956205 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 12:56:25.035255  580915 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1007 12:56:25.039496  580915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
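The bash one-liner above is minikube's idempotent /etc/hosts update: filter out any stale host.minikube.internal entry, append the fresh mapping, and copy the temp file back with sudo (staging via /tmp avoids redirecting as root straight into /etc/hosts). Generalized as a sketch with illustrative NAME/IP variables:

	# Idempotent hosts-entry update in the style of the command above (NAME/IP are illustrative)
	NAME='host.minikube.internal'; IP='192.168.49.1'
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts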
	I1007 12:56:25.051914  580915 kubeadm.go:883] updating cluster {Name:addons-956205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-956205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 12:56:25.052057  580915 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 12:56:25.052127  580915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:56:25.090625  580915 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 12:56:25.090653  580915 containerd.go:534] Images already preloaded, skipping extraction
	I1007 12:56:25.090728  580915 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 12:56:25.132633  580915 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 12:56:25.132658  580915 cache_images.go:84] Images are preloaded, skipping loading
	I1007 12:56:25.132666  580915 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1007 12:56:25.132773  580915 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-956205 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-956205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1007 12:56:25.132849  580915 ssh_runner.go:195] Run: sudo crictl info
	I1007 12:56:25.171067  580915 cni.go:84] Creating CNI manager for ""
	I1007 12:56:25.171100  580915 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:56:25.171112  580915 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 12:56:25.171142  580915 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-956205 NodeName:addons-956205 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 12:56:25.171315  580915 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-956205"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 12:56:25.171422  580915 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 12:56:25.185778  580915 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 12:56:25.185865  580915 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 12:56:25.195063  580915 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1007 12:56:25.213587  580915 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 12:56:25.232227  580915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
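The rendered kubeadm config shown above is staged as /var/tmp/minikube/kubeadm.yaml.new before init. Recent kubeadm releases can lint such a file offline (kubeadm config validate exists since v1.26; running it here is a sketch, not something this test does):

	# Validate the staged config with the same kubeadm binary the cluster will use
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new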
	I1007 12:56:25.250286  580915 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1007 12:56:25.253791  580915 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 12:56:25.264831  580915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:56:25.350977  580915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:56:25.365620  580915 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205 for IP: 192.168.49.2
	I1007 12:56:25.365776  580915 certs.go:194] generating shared ca certs ...
	I1007 12:56:25.365810  580915 certs.go:226] acquiring lock for ca certs: {Name:mkb94cd23ae3efb673f2949842bd2c98014816e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:25.365971  580915 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-574640/.minikube/ca.key
	I1007 12:56:26.375852  580915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-574640/.minikube/ca.crt ...
	I1007 12:56:26.375886  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/ca.crt: {Name:mkb4bf4ef64e25b321f5aa800e9b402225f44ab4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:26.376089  580915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-574640/.minikube/ca.key ...
	I1007 12:56:26.376103  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/ca.key: {Name:mk85ef74308538ec692a0e75006cef8bccfdd519 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:26.376190  580915 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.key
	I1007 12:56:27.123944  580915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.crt ...
	I1007 12:56:27.123978  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.crt: {Name:mk0d85b51b24242c3bbd731960c74cbac7cffa48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:27.124186  580915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.key ...
	I1007 12:56:27.124201  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.key: {Name:mk1e658e6f4a74c436ed45a4369af3980d61f94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:27.124286  580915 certs.go:256] generating profile certs ...
	I1007 12:56:27.124348  580915 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.key
	I1007 12:56:27.124375  580915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt with IP's: []
	I1007 12:56:27.342541  580915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt ...
	I1007 12:56:27.342574  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: {Name:mk69c1781417df6ce43ae111cffacd6e31b3e45d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:27.342759  580915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.key ...
	I1007 12:56:27.342769  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.key: {Name:mke4bf3d9dcd1223936951b6a4bfde3e2e9eafd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:27.342850  580915 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.key.852628a0
	I1007 12:56:27.342870  580915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.crt.852628a0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1007 12:56:27.628027  580915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.crt.852628a0 ...
	I1007 12:56:27.628059  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.crt.852628a0: {Name:mk31b7a389e20f9419e76acca3328a657ae661c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:27.628254  580915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.key.852628a0 ...
	I1007 12:56:27.628270  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.key.852628a0: {Name:mkd9e4ae91085746349c8ba1117d0a0db12882fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:27.628358  580915 certs.go:381] copying /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.crt.852628a0 -> /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.crt
	I1007 12:56:27.628439  580915 certs.go:385] copying /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.key.852628a0 -> /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.key
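The apiserver profile cert generated above carries the service VIPs (10.96.0.1, 10.0.0.1), loopback, and the node IP as SANs, which is what lets clients reach the API server under any of those addresses. The written cert can be inspected directly (a sketch using the path from the log):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.crt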
	I1007 12:56:27.628494  580915 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/proxy-client.key
	I1007 12:56:27.628514  580915 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/proxy-client.crt with IP's: []
	I1007 12:56:28.215342  580915 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/proxy-client.crt ...
	I1007 12:56:28.215382  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/proxy-client.crt: {Name:mk92a042fe54d3386eb9e731ea9ff86513d2eb4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:28.215589  580915 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/proxy-client.key ...
	I1007 12:56:28.215604  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/proxy-client.key: {Name:mk34dd3ef41115a5dd46f471d59903b5c920abb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:28.215827  580915 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 12:56:28.215871  580915 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem (1082 bytes)
	I1007 12:56:28.215905  580915 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem (1123 bytes)
	I1007 12:56:28.215932  580915 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem (1679 bytes)
	I1007 12:56:28.216547  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 12:56:28.242137  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 12:56:28.266742  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 12:56:28.291479  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 12:56:28.319836  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1007 12:56:28.347267  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1007 12:56:28.373636  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 12:56:28.399585  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 12:56:28.424324  580915 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 12:56:28.449921  580915 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 12:56:28.469347  580915 ssh_runner.go:195] Run: openssl version
	I1007 12:56:28.475438  580915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 12:56:28.485364  580915 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:56:28.488722  580915 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:56:28.488794  580915 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 12:56:28.495702  580915 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
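The symlink name /etc/ssl/certs/b5213941.0 is not arbitrary: it is the OpenSSL subject hash of the minikube CA plus a .0 suffix, which is why the log runs openssl x509 -hash just before creating it. The correspondence can be reproduced by hand (a sketch):

	# c_rehash-style link names derive from the certificate's subject hash
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink above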
	I1007 12:56:28.505720  580915 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 12:56:28.509104  580915 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1007 12:56:28.509154  580915 kubeadm.go:392] StartCluster: {Name:addons-956205 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-956205 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:56:28.509234  580915 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1007 12:56:28.509292  580915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 12:56:28.548206  580915 cri.go:89] found id: ""
	I1007 12:56:28.548278  580915 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 12:56:28.557309  580915 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1007 12:56:28.566388  580915 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1007 12:56:28.566479  580915 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1007 12:56:28.575550  580915 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1007 12:56:28.575571  580915 kubeadm.go:157] found existing configuration files:
	
	I1007 12:56:28.575622  580915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1007 12:56:28.584600  580915 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1007 12:56:28.584697  580915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1007 12:56:28.594357  580915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1007 12:56:28.603094  580915 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1007 12:56:28.603165  580915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1007 12:56:28.611595  580915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1007 12:56:28.621392  580915 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1007 12:56:28.621462  580915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1007 12:56:28.630785  580915 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1007 12:56:28.640178  580915 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1007 12:56:28.640295  580915 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1007 12:56:28.648796  580915 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1007 12:56:28.692421  580915 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1007 12:56:28.692501  580915 kubeadm.go:310] [preflight] Running pre-flight checks
	I1007 12:56:28.712152  580915 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1007 12:56:28.712228  580915 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1007 12:56:28.712267  580915 kubeadm.go:310] OS: Linux
	I1007 12:56:28.712319  580915 kubeadm.go:310] CGROUPS_CPU: enabled
	I1007 12:56:28.712378  580915 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1007 12:56:28.712432  580915 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1007 12:56:28.712484  580915 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1007 12:56:28.712535  580915 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1007 12:56:28.712587  580915 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1007 12:56:28.712639  580915 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1007 12:56:28.712690  580915 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1007 12:56:28.712739  580915 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1007 12:56:28.775044  580915 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1007 12:56:28.775159  580915 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1007 12:56:28.775253  580915 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1007 12:56:28.780996  580915 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1007 12:56:28.786397  580915 out.go:235]   - Generating certificates and keys ...
	I1007 12:56:28.786570  580915 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1007 12:56:28.786661  580915 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1007 12:56:29.404365  580915 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1007 12:56:30.241971  580915 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1007 12:56:30.544951  580915 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1007 12:56:31.342322  580915 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1007 12:56:31.967025  580915 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1007 12:56:31.967342  580915 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-956205 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1007 12:56:32.201584  580915 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1007 12:56:32.201879  580915 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-956205 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1007 12:56:33.007114  580915 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1007 12:56:33.412303  580915 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1007 12:56:33.968747  580915 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1007 12:56:33.968968  580915 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1007 12:56:34.100879  580915 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1007 12:56:34.421433  580915 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1007 12:56:34.634849  580915 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1007 12:56:34.860290  580915 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1007 12:56:35.474422  580915 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1007 12:56:35.475152  580915 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1007 12:56:35.479209  580915 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1007 12:56:35.481616  580915 out.go:235]   - Booting up control plane ...
	I1007 12:56:35.481744  580915 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1007 12:56:35.482124  580915 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1007 12:56:35.483540  580915 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1007 12:56:35.497080  580915 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1007 12:56:35.503599  580915 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1007 12:56:35.503662  580915 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1007 12:56:35.628498  580915 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1007 12:56:35.628631  580915 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1007 12:56:37.629625  580915 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001521952s
	I1007 12:56:37.629740  580915 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1007 12:56:43.631237  580915 kubeadm.go:310] [api-check] The API server is healthy after 6.001593226s
	I1007 12:56:43.650824  580915 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1007 12:56:43.670041  580915 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1007 12:56:43.697063  580915 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1007 12:56:43.697563  580915 kubeadm.go:310] [mark-control-plane] Marking the node addons-956205 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1007 12:56:43.707763  580915 kubeadm.go:310] [bootstrap-token] Using token: 93ip34.aiyhl758r5dmpf7m
	I1007 12:56:43.711340  580915 out.go:235]   - Configuring RBAC rules ...
	I1007 12:56:43.711479  580915 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1007 12:56:43.716198  580915 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1007 12:56:43.725453  580915 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1007 12:56:43.730511  580915 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1007 12:56:43.735315  580915 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1007 12:56:43.739058  580915 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1007 12:56:44.039502  580915 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1007 12:56:44.467106  580915 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1007 12:56:45.073036  580915 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1007 12:56:45.086067  580915 kubeadm.go:310] 
	I1007 12:56:45.086149  580915 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1007 12:56:45.086156  580915 kubeadm.go:310] 
	I1007 12:56:45.086232  580915 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1007 12:56:45.086237  580915 kubeadm.go:310] 
	I1007 12:56:45.086263  580915 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1007 12:56:45.086442  580915 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1007 12:56:45.086503  580915 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1007 12:56:45.086508  580915 kubeadm.go:310] 
	I1007 12:56:45.086562  580915 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1007 12:56:45.086567  580915 kubeadm.go:310] 
	I1007 12:56:45.086614  580915 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1007 12:56:45.086619  580915 kubeadm.go:310] 
	I1007 12:56:45.086670  580915 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1007 12:56:45.086745  580915 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1007 12:56:45.086813  580915 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1007 12:56:45.086818  580915 kubeadm.go:310] 
	I1007 12:56:45.086901  580915 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1007 12:56:45.086977  580915 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1007 12:56:45.086985  580915 kubeadm.go:310] 
	I1007 12:56:45.087068  580915 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 93ip34.aiyhl758r5dmpf7m \
	I1007 12:56:45.087170  580915 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c2b2137dec1618e18479d111afb0e0c3860c699b3b62bdaf9a6309bd45d911e4 \
	I1007 12:56:45.087192  580915 kubeadm.go:310] 	--control-plane 
	I1007 12:56:45.087197  580915 kubeadm.go:310] 
	I1007 12:56:45.087282  580915 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1007 12:56:45.087288  580915 kubeadm.go:310] 
	I1007 12:56:45.087369  580915 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 93ip34.aiyhl758r5dmpf7m \
	I1007 12:56:45.087470  580915 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:c2b2137dec1618e18479d111afb0e0c3860c699b3b62bdaf9a6309bd45d911e4 
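The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. Following the standard kubeadm recipe, it can be recomputed on the control plane (a sketch assuming an RSA CA key, which kubeadm generates by default; the cert dir comes from certificatesDir in the config above):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'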
	I1007 12:56:45.090296  580915 kubeadm.go:310] W1007 12:56:28.689016    1029 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:56:45.090598  580915 kubeadm.go:310] W1007 12:56:28.689938    1029 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1007 12:56:45.090809  580915 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1007 12:56:45.090914  580915 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1007 12:56:45.090932  580915 cni.go:84] Creating CNI manager for ""
	I1007 12:56:45.090941  580915 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:56:45.093489  580915 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1007 12:56:45.095780  580915 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1007 12:56:45.100815  580915 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1007 12:56:45.100837  580915 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1007 12:56:45.177362  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
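With the docker driver and containerd runtime, minikube recommends kindnet and applies its manifest with the cluster's own kubectl, as above. A quick readiness check would look like this (a sketch; the app=kindnet label is an assumption taken from the kindnet manifest, which is not shown in this log):

	sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l app=kindnet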
	I1007 12:56:45.570201  580915 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1007 12:56:45.570267  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:45.570368  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-956205 minikube.k8s.io/updated_at=2024_10_07T12_56_45_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c minikube.k8s.io/name=addons-956205 minikube.k8s.io/primary=true
	I1007 12:56:45.733941  580915 ops.go:34] apiserver oom_adj: -16
	I1007 12:56:45.734056  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:46.235084  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:46.734149  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:47.235087  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:47.734806  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:48.234136  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:48.735145  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:49.234667  580915 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1007 12:56:49.368026  580915 kubeadm.go:1113] duration metric: took 3.797813426s to wait for elevateKubeSystemPrivileges
	I1007 12:56:49.368070  580915 kubeadm.go:394] duration metric: took 20.858917648s to StartCluster
	I1007 12:56:49.368088  580915 settings.go:142] acquiring lock: {Name:mk8a7c208419d2453ea37ed5e7d0421609f0d046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:49.368248  580915 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 12:56:49.368728  580915 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/kubeconfig: {Name:mk8cb646df388630470eb87db824f7b511497a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:56:49.368972  580915 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1007 12:56:49.369135  580915 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1007 12:56:49.369462  580915 config.go:182] Loaded profile config "addons-956205": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:56:49.369603  580915 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1007 12:56:49.369784  580915 addons.go:69] Setting yakd=true in profile "addons-956205"
	I1007 12:56:49.369807  580915 addons.go:234] Setting addon yakd=true in "addons-956205"
	I1007 12:56:49.369859  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.370401  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.370815  580915 addons.go:69] Setting inspektor-gadget=true in profile "addons-956205"
	I1007 12:56:49.370835  580915 addons.go:234] Setting addon inspektor-gadget=true in "addons-956205"
	I1007 12:56:49.370870  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.371277  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.371551  580915 addons.go:69] Setting metrics-server=true in profile "addons-956205"
	I1007 12:56:49.371588  580915 addons.go:234] Setting addon metrics-server=true in "addons-956205"
	I1007 12:56:49.371643  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.372122  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.372671  580915 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-956205"
	I1007 12:56:49.372699  580915 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-956205"
	I1007 12:56:49.372743  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.373253  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.377757  580915 addons.go:69] Setting cloud-spanner=true in profile "addons-956205"
	I1007 12:56:49.377808  580915 addons.go:234] Setting addon cloud-spanner=true in "addons-956205"
	I1007 12:56:49.377852  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.378584  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.385959  580915 addons.go:69] Setting registry=true in profile "addons-956205"
	I1007 12:56:49.386011  580915 addons.go:234] Setting addon registry=true in "addons-956205"
	I1007 12:56:49.386049  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.386637  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.408480  580915 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-956205"
	I1007 12:56:49.408629  580915 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-956205"
	I1007 12:56:49.408701  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.409952  580915 addons.go:69] Setting storage-provisioner=true in profile "addons-956205"
	I1007 12:56:49.409978  580915 addons.go:234] Setting addon storage-provisioner=true in "addons-956205"
	I1007 12:56:49.410014  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.410456  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.418372  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.433792  580915 addons.go:69] Setting default-storageclass=true in profile "addons-956205"
	I1007 12:56:49.433832  580915 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-956205"
	I1007 12:56:49.434203  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.445522  580915 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-956205"
	I1007 12:56:49.445655  580915 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-956205"
	I1007 12:56:49.454999  580915 addons.go:69] Setting gcp-auth=true in profile "addons-956205"
	I1007 12:56:49.455109  580915 mustload.go:65] Loading cluster: addons-956205
	I1007 12:56:49.455524  580915 config.go:182] Loaded profile config "addons-956205": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 12:56:49.455975  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.468240  580915 addons.go:69] Setting volcano=true in profile "addons-956205"
	I1007 12:56:49.468334  580915 addons.go:234] Setting addon volcano=true in "addons-956205"
	I1007 12:56:49.468429  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.473858  580915 addons.go:69] Setting volumesnapshots=true in profile "addons-956205"
	I1007 12:56:49.473941  580915 addons.go:234] Setting addon volumesnapshots=true in "addons-956205"
	I1007 12:56:49.474009  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.475056  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.487966  580915 out.go:177] * Verifying Kubernetes components...
	I1007 12:56:49.497054  580915 addons.go:69] Setting ingress=true in profile "addons-956205"
	I1007 12:56:49.497164  580915 addons.go:234] Setting addon ingress=true in "addons-956205"
	I1007 12:56:49.497247  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.497992  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.517247  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.527309  580915 addons.go:69] Setting ingress-dns=true in profile "addons-956205"
	I1007 12:56:49.527349  580915 addons.go:234] Setting addon ingress-dns=true in "addons-956205"
	I1007 12:56:49.527421  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.528120  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.541070  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
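
The interleaved burst above is the addon manager toggling one `Setting addon <name>=true` flag per addon in the profile and then, per addon, confirming the node container is up with the same `docker container inspect --format={{.State.Status}}` shell-out each time. A minimal Go sketch of that pattern (containerStatus is an illustrative name, not minikube's cli_runner, which wraps the same call with timing and structured logging):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerStatus shells out to `docker container inspect` with a Go
    // template, exactly as the Run: lines above do, and returns the state.
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        status, err := containerStatus("addons-956205")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println(status) // "running" while the cluster is up
    }
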
	I1007 12:56:49.575848  580915 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 12:56:49.582190  580915 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1007 12:56:49.584122  580915 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 12:56:49.584171  580915 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 12:56:49.584264  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
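
The inspect template that recurs in these Run: lines, {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, resolves the host port Docker published for the node container's SSH endpoint (33504 in this run); each manifest copy is preceded by that lookup so the client can dial 127.0.0.1:<port>. A hedged equivalent that decodes the raw inspect JSON instead of using a Go template (hostSSHPort and the struct shape are assumptions for illustration):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // inspectInfo models just the slice of `docker container inspect`
    // output needed for the port lookup.
    type inspectInfo struct {
        NetworkSettings struct {
            Ports map[string][]struct {
                HostIp   string
                HostPort string
            }
        }
    }

    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", container).Output()
        if err != nil {
            return "", err
        }
        var infos []inspectInfo
        if err := json.Unmarshal(out, &infos); err != nil {
            return "", err
        }
        if len(infos) == 0 {
            return "", fmt.Errorf("no such container: %s", container)
        }
        bindings := infos[0].NetworkSettings.Ports["22/tcp"]
        if len(bindings) == 0 {
            return "", fmt.Errorf("no published 22/tcp port on %s", container)
        }
        return bindings[0].HostPort, nil
    }

    func main() {
        port, err := hostSSHPort("addons-956205")
        fmt.Println(port, err)
    }
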
	I1007 12:56:49.643042  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1007 12:56:49.643196  580915 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 12:56:49.644808  580915 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1007 12:56:49.644837  580915 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1007 12:56:49.644921  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.648562  580915 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1007 12:56:49.653884  580915 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1007 12:56:49.653919  580915 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1007 12:56:49.654028  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.691982  580915 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:56:49.692003  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 12:56:49.692074  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.704129  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1007 12:56:49.704277  580915 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1007 12:56:49.705217  580915 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1007 12:56:49.735504  580915 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1007 12:56:49.743512  580915 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1007 12:56:49.743779  580915 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1007 12:56:49.744201  580915 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1007 12:56:49.744281  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.743879  580915 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1007 12:56:49.750163  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1007 12:56:49.751117  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1007 12:56:49.755307  580915 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1007 12:56:49.755612  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.765991  580915 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 12:56:49.766013  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1007 12:56:49.766078  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.773211  580915 out.go:177]   - Using image docker.io/registry:2.8.3
	I1007 12:56:49.775992  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1007 12:56:49.778879  580915 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1007 12:56:49.779126  580915 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1007 12:56:49.779160  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1007 12:56:49.779310  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.828517  580915 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1007 12:56:49.834108  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1007 12:56:49.836906  580915 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1007 12:56:49.836969  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1007 12:56:49.837073  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.842330  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:49.844972  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.855593  580915 addons.go:234] Setting addon default-storageclass=true in "addons-956205"
	I1007 12:56:49.855691  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.856185  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.868263  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1007 12:56:49.873117  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1007 12:56:49.877857  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1007 12:56:49.879450  580915 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:56:49.883448  580915 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1007 12:56:49.883568  580915 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1007 12:56:49.885420  580915 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:56:49.887387  580915 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 12:56:49.887412  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1007 12:56:49.887481  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.895626  580915 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1007 12:56:49.895702  580915 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1007 12:56:49.895811  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.905856  580915 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-956205"
	I1007 12:56:49.905906  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:49.906346  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:49.915614  580915 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1007 12:56:49.920422  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:49.934790  580915 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 12:56:49.934862  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1007 12:56:49.934967  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:49.991029  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:49.991508  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.015581  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.028798  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.070008  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.070792  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.074100  580915 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 12:56:50.074124  580915 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 12:56:50.074193  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:50.110259  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.124260  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.132363  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.139101  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.160484  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:50.160940  580915 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1007 12:56:50.161152  580915 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 12:56:50.161252  580915 retry.go:31] will retry after 265.433396ms: ssh: handshake failed: EOF
	W1007 12:56:50.162980  580915 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 12:56:50.163007  580915 retry.go:31] will retry after 180.621907ms: ssh: handshake failed: EOF
	I1007 12:56:50.166387  580915 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1007 12:56:50.171521  580915 out.go:177]   - Using image docker.io/busybox:stable
	I1007 12:56:50.180528  580915 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 12:56:50.180555  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1007 12:56:50.180627  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:50.188620  580915 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 12:56:50.212743  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	W1007 12:56:50.428918  580915 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1007 12:56:50.428948  580915 retry.go:31] will retry after 201.106266ms: ssh: handshake failed: EOF
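
The dial failure (will retry) pairs above are benign: several goroutines open SSH sessions to the node at once while sshd is still settling, a handshake occasionally fails with EOF, and the client retries after a short randomized delay instead of aborting. A sketch of that retry shape (retryDial is an assumed name, not minikube's retry package; the delay range merely matches the magnitudes logged above):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryDial retries a transient dial error after a short random
    // back-off rather than failing the whole addon enablement.
    func retryDial(attempts int, dial func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = dial(); err == nil {
                return nil
            }
            delay := time.Duration(100+rand.Intn(300)) * time.Millisecond
            fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryDial(5, func() error {
            calls++
            if calls < 3 { // first two dials fail, as in the log
                return errors.New("ssh: handshake failed: EOF")
            }
            return nil
        })
        fmt.Println("result:", err)
    }
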
	I1007 12:56:50.747509  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1007 12:56:50.796631  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 12:56:50.882314  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1007 12:56:50.908493  580915 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1007 12:56:50.908518  580915 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1007 12:56:50.913906  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1007 12:56:50.939321  580915 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 12:56:50.939394  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1007 12:56:50.995992  580915 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1007 12:56:50.996071  580915 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1007 12:56:51.037197  580915 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1007 12:56:51.037227  580915 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1007 12:56:51.040711  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1007 12:56:51.104930  580915 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1007 12:56:51.104961  580915 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1007 12:56:51.107178  580915 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1007 12:56:51.107206  580915 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1007 12:56:51.136562  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1007 12:56:51.139313  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 12:56:51.210559  580915 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1007 12:56:51.210587  580915 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1007 12:56:51.271356  580915 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 12:56:51.271385  580915 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 12:56:51.290822  580915 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1007 12:56:51.290847  580915 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1007 12:56:51.312581  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1007 12:56:51.317698  580915 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1007 12:56:51.317727  580915 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1007 12:56:51.361633  580915 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1007 12:56:51.361730  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1007 12:56:51.386894  580915 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1007 12:56:51.386918  580915 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1007 12:56:51.492167  580915 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1007 12:56:51.492243  580915 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1007 12:56:51.580678  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1007 12:56:51.626897  580915 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1007 12:56:51.626980  580915 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1007 12:56:51.652514  580915 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1007 12:56:51.652588  580915 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1007 12:56:51.737706  580915 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:56:51.737790  580915 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 12:56:51.790031  580915 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1007 12:56:51.790105  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1007 12:56:51.803437  580915 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1007 12:56:51.803514  580915 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1007 12:56:51.883855  580915 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1007 12:56:51.883939  580915 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1007 12:56:51.950665  580915 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 12:56:51.950739  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1007 12:56:51.952814  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1007 12:56:51.955156  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 12:56:52.027084  580915 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1007 12:56:52.027168  580915 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1007 12:56:52.148123  580915 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1007 12:56:52.148149  580915 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1007 12:56:52.173098  580915 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.012081704s)
	I1007 12:56:52.173190  580915 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
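
The two-second command that just completed rewrites the live coredns ConfigMap in place: it reads the Corefile with kubectl, pipes it through sed to insert a hosts block ahead of the forward plugin (plus a log directive ahead of errors), and feeds the result back through kubectl replace. Reconstructed from the sed program itself, the injected stanza is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

so pods can resolve host.minikube.internal to the Docker network gateway without any external DNS.
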
	I1007 12:56:52.174439  580915 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.985791964s)
	I1007 12:56:52.175493  580915 node_ready.go:35] waiting up to 6m0s for node "addons-956205" to be "Ready" ...
	I1007 12:56:52.179179  580915 node_ready.go:49] node "addons-956205" has status "Ready":"True"
	I1007 12:56:52.179205  580915 node_ready.go:38] duration metric: took 3.653725ms for node "addons-956205" to be "Ready" ...
	I1007 12:56:52.179215  580915 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:56:52.188375  580915 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-hf8kj" in "kube-system" namespace to be "Ready" ...
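
Each pod_ready wait above polls the pod's Ready condition until it flips to True or the 6m0s budget runs out. An equivalent poll via kubectl rather than the watch-style code in pod_ready.go (podReady and the 2s interval are assumptions for illustration):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // podReady polls the Ready condition with kubectl until it reports
    // "True" or the deadline passes.
    func podReady(ns, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "pod", name, "-n", ns,
                "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && string(out) == "True" {
                return true
            }
            time.Sleep(2 * time.Second)
        }
        return false
    }

    func main() {
        fmt.Println(podReady("kube-system", "coredns-7c65d6cfc9-hf8kj", 6*time.Minute))
    }
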
	I1007 12:56:52.457497  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1007 12:56:52.605085  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.857478168s)
	I1007 12:56:52.635686  580915 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1007 12:56:52.635715  580915 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1007 12:56:52.669074  580915 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1007 12:56:52.669106  580915 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1007 12:56:52.677946  580915 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-956205" context rescaled to 1 replicas
	I1007 12:56:52.797388  580915 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1007 12:56:52.797419  580915 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1007 12:56:52.938895  580915 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1007 12:56:52.938921  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1007 12:56:53.128176  580915 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1007 12:56:53.128203  580915 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1007 12:56:53.224353  580915 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1007 12:56:53.224383  580915 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1007 12:56:53.241913  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.445193526s)
	I1007 12:56:53.241988  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.359651248s)
	I1007 12:56:53.388615  580915 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 12:56:53.388643  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1007 12:56:53.559573  580915 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1007 12:56:53.559603  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1007 12:56:53.767611  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1007 12:56:53.959311  580915 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1007 12:56:53.959391  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1007 12:56:54.237809  580915 pod_ready.go:103] pod "coredns-7c65d6cfc9-hf8kj" in "kube-system" namespace has status "Ready":"False"
	I1007 12:56:54.634072  580915 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 12:56:54.634148  580915 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1007 12:56:54.818209  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.904218281s)
	I1007 12:56:55.024073  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1007 12:56:56.758106  580915 pod_ready.go:103] pod "coredns-7c65d6cfc9-hf8kj" in "kube-system" namespace has status "Ready":"False"
	I1007 12:56:57.057269  580915 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1007 12:56:57.057361  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:57.081833  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:57.638053  580915 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1007 12:56:57.870006  580915 addons.go:234] Setting addon gcp-auth=true in "addons-956205"
	I1007 12:56:57.870062  580915 host.go:66] Checking if "addons-956205" exists ...
	I1007 12:56:57.870513  580915 cli_runner.go:164] Run: docker container inspect addons-956205 --format={{.State.Status}}
	I1007 12:56:57.895161  580915 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1007 12:56:57.895221  580915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-956205
	I1007 12:56:57.920281  580915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33504 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/addons-956205/id_rsa Username:docker}
	I1007 12:56:59.219860  580915 pod_ready.go:103] pod "coredns-7c65d6cfc9-hf8kj" in "kube-system" namespace has status "Ready":"False"
	I1007 12:57:00.384851  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.248244815s)
	I1007 12:57:00.384889  580915 addons.go:475] Verifying addon ingress=true in "addons-956205"
	I1007 12:57:00.385042  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.344299673s)
	I1007 12:57:00.385077  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.245744893s)
	I1007 12:57:00.390084  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.077469524s)
	I1007 12:57:00.390134  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.80938781s)
	I1007 12:57:00.390146  580915 addons.go:475] Verifying addon registry=true in "addons-956205"
	I1007 12:57:00.391691  580915 out.go:177] * Verifying ingress addon...
	I1007 12:57:00.394871  580915 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1007 12:57:00.400506  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.447606768s)
	I1007 12:57:00.400908  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.445673991s)
	I1007 12:57:00.400937  580915 addons.go:475] Verifying addon metrics-server=true in "addons-956205"
	I1007 12:57:00.401023  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.943496777s)
	W1007 12:57:00.401044  580915 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 12:57:00.401079  580915 retry.go:31] will retry after 336.132068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1007 12:57:00.401165  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.633469759s)
	I1007 12:57:00.406132  580915 out.go:177] * Verifying registry addon...
	I1007 12:57:00.410362  580915 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-956205 service yakd-dashboard -n yakd-dashboard
	
	I1007 12:57:00.411715  580915 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1007 12:57:00.492316  580915 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1007 12:57:00.492350  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:00.493963  580915 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1007 12:57:00.493994  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:00.737511  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
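
The failure above is a CRD ordering race, not a broken manifest: a single kubectl apply creates both the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the object cannot be mapped until the API server has registered the new kinds, hence "ensure CRDs are installed first". The re-apply just issued goes through once the CRDs are established (its completion is logged below). A hedged sketch of that recovery pattern (applyWithCRDRetry is an assumed name):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // applyWithCRDRetry re-runs `kubectl apply` when the only failure is
    // the "no matches for kind" mapping error a same-batch CRD causes.
    func applyWithCRDRetry(files []string, attempts int) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        var lastErr error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("kubectl", args...)
            var stderr bytes.Buffer
            cmd.Stderr = &stderr
            if err := cmd.Run(); err == nil {
                return nil
            }
            if !strings.Contains(stderr.String(), "no matches for kind") {
                return fmt.Errorf("apply failed: %s", stderr.String())
            }
            lastErr = fmt.Errorf("CRD not yet registered: %s", stderr.String())
            time.Sleep(350 * time.Millisecond) // comparable to the 336ms retry above
        }
        return lastErr
    }

    func main() {
        fmt.Println(applyWithCRDRetry(
            []string{"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 3))
    }
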
	I1007 12:57:00.901655  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:01.001224  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:01.333476  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.309300644s)
	I1007 12:57:01.333519  580915 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-956205"
	I1007 12:57:01.333695  580915 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.438509682s)
	I1007 12:57:01.335397  580915 out.go:177] * Verifying csi-hostpath-driver addon...
	I1007 12:57:01.335472  580915 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1007 12:57:01.338599  580915 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1007 12:57:01.340743  580915 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1007 12:57:01.342332  580915 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1007 12:57:01.342371  580915 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1007 12:57:01.350547  580915 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1007 12:57:01.350621  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:01.398974  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:01.402584  580915 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1007 12:57:01.402612  580915 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1007 12:57:01.425765  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:01.428021  580915 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 12:57:01.428047  580915 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1007 12:57:01.449518  580915 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1007 12:57:01.694150  580915 pod_ready.go:103] pod "coredns-7c65d6cfc9-hf8kj" in "kube-system" namespace has status "Ready":"False"
	I1007 12:57:01.854168  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:01.906606  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:01.948249  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:02.345242  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:02.399869  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:02.426959  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:02.711023  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.9734407s)
	I1007 12:57:02.882610  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:02.909654  580915 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.460095626s)
	I1007 12:57:02.913138  580915 addons.go:475] Verifying addon gcp-auth=true in "addons-956205"
	I1007 12:57:02.915255  580915 out.go:177] * Verifying gcp-auth addon...
	I1007 12:57:02.918328  580915 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1007 12:57:02.969471  580915 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 12:57:02.970489  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:02.971005  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:03.344101  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:03.444840  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:03.445392  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:03.695464  580915 pod_ready.go:103] pod "coredns-7c65d6cfc9-hf8kj" in "kube-system" namespace has status "Ready":"False"
	I1007 12:57:03.845344  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:03.899630  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:03.944568  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:04.197256  580915 pod_ready.go:93] pod "coredns-7c65d6cfc9-hf8kj" in "kube-system" namespace has status "Ready":"True"
	I1007 12:57:04.197291  580915 pod_ready.go:82] duration metric: took 12.00887236s for pod "coredns-7c65d6cfc9-hf8kj" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.197304  580915 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-sd46r" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.200215  580915 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-sd46r" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-sd46r" not found
	I1007 12:57:04.200246  580915 pod_ready.go:82] duration metric: took 2.934032ms for pod "coredns-7c65d6cfc9-sd46r" in "kube-system" namespace to be "Ready" ...
	E1007 12:57:04.200257  580915 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-sd46r" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-sd46r" not found
	I1007 12:57:04.200265  580915 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-956205" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.211168  580915 pod_ready.go:93] pod "etcd-addons-956205" in "kube-system" namespace has status "Ready":"True"
	I1007 12:57:04.211194  580915 pod_ready.go:82] duration metric: took 10.92201ms for pod "etcd-addons-956205" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.211210  580915 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-956205" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.218467  580915 pod_ready.go:93] pod "kube-apiserver-addons-956205" in "kube-system" namespace has status "Ready":"True"
	I1007 12:57:04.218495  580915 pod_ready.go:82] duration metric: took 7.27668ms for pod "kube-apiserver-addons-956205" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.218507  580915 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-956205" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.231007  580915 pod_ready.go:93] pod "kube-controller-manager-addons-956205" in "kube-system" namespace has status "Ready":"True"
	I1007 12:57:04.231034  580915 pod_ready.go:82] duration metric: took 12.518907ms for pod "kube-controller-manager-addons-956205" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.231053  580915 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-zl72w" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.343865  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:04.393099  580915 pod_ready.go:93] pod "kube-proxy-zl72w" in "kube-system" namespace has status "Ready":"True"
	I1007 12:57:04.393126  580915 pod_ready.go:82] duration metric: took 162.065027ms for pod "kube-proxy-zl72w" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.393139  580915 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-956205" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.399218  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:04.425295  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:04.793280  580915 pod_ready.go:93] pod "kube-scheduler-addons-956205" in "kube-system" namespace has status "Ready":"True"
	I1007 12:57:04.793306  580915 pod_ready.go:82] duration metric: took 400.159348ms for pod "kube-scheduler-addons-956205" in "kube-system" namespace to be "Ready" ...
	I1007 12:57:04.793316  580915 pod_ready.go:39] duration metric: took 12.614087378s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 12:57:04.793330  580915 api_server.go:52] waiting for apiserver process to appear ...
	I1007 12:57:04.793388  580915 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 12:57:04.810824  580915 api_server.go:72] duration metric: took 15.441795435s to wait for apiserver process to appear ...
	I1007 12:57:04.810854  580915 api_server.go:88] waiting for apiserver healthz status ...
	I1007 12:57:04.810876  580915 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1007 12:57:04.818764  580915 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1007 12:57:04.819770  580915 api_server.go:141] control plane version: v1.31.1
	I1007 12:57:04.819796  580915 api_server.go:131] duration metric: took 8.935696ms to wait for apiserver health ...
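
The healthz exchange above is a plain HTTPS GET against the apiserver: a 200 with body "ok" counts as healthy, after which the control-plane version is read. A minimal probe of the same shape (healthzOK is an assumed name; skipping TLS verification is a shortcut for the sketch, whereas the real client trusts the cluster CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // healthzOK GETs /healthz and treats HTTP 200 with body "ok" as healthy.
    func healthzOK(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        ok, err := healthzOK("https://192.168.49.2:8443/healthz")
        fmt.Println(ok, err)
    }
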
	I1007 12:57:04.819805  580915 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 12:57:04.843503  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:04.902108  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:04.925414  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:05.002356  580915 system_pods.go:59] 18 kube-system pods found
	I1007 12:57:05.002405  580915 system_pods.go:61] "coredns-7c65d6cfc9-hf8kj" [09d0cec9-5154-4d58-b683-609a2ad27f7c] Running
	I1007 12:57:05.002416  580915 system_pods.go:61] "csi-hostpath-attacher-0" [6451a8ad-1495-4655-b571-72b4575055ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 12:57:05.002424  580915 system_pods.go:61] "csi-hostpath-resizer-0" [dbf262a8-4555-456c-b88b-c1cd812c1b7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 12:57:05.002434  580915 system_pods.go:61] "csi-hostpathplugin-g6pv8" [cf33d298-8fbe-444f-8212-f11c97ba8b32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 12:57:05.002439  580915 system_pods.go:61] "etcd-addons-956205" [0f70aeb5-afe2-4791-924e-bb632e71780f] Running
	I1007 12:57:05.002445  580915 system_pods.go:61] "kindnet-ppxk9" [075d5707-507e-40c1-a4b7-4a728f6c9451] Running
	I1007 12:57:05.002450  580915 system_pods.go:61] "kube-apiserver-addons-956205" [53e4e638-1997-4359-9ad6-67272cffc4ee] Running
	I1007 12:57:05.002460  580915 system_pods.go:61] "kube-controller-manager-addons-956205" [889edccc-0259-4173-8359-6486109f3feb] Running
	I1007 12:57:05.002467  580915 system_pods.go:61] "kube-ingress-dns-minikube" [f233761d-43cb-41f9-81d0-f52eab1d2b89] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1007 12:57:05.002477  580915 system_pods.go:61] "kube-proxy-zl72w" [62338a59-f097-43c8-a7e1-414d99cb93a5] Running
	I1007 12:57:05.002482  580915 system_pods.go:61] "kube-scheduler-addons-956205" [53389a08-3320-4187-a3e6-9e3aae407c9a] Running
	I1007 12:57:05.002495  580915 system_pods.go:61] "metrics-server-84c5f94fbc-h8njn" [9b51577a-261f-4581-bc76-12b95938c80c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 12:57:05.002503  580915 system_pods.go:61] "nvidia-device-plugin-daemonset-dfwvg" [33c78e7e-3607-4278-ad34-573034aa90cf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1007 12:57:05.002518  580915 system_pods.go:61] "registry-66c9cd494c-5dl6n" [6c6e6f59-720f-486b-99d8-a848c9f81e07] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 12:57:05.002534  580915 system_pods.go:61] "registry-proxy-9kpnr" [145ee917-0ec2-4857-8fe0-d759e2d5ec18] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 12:57:05.002541  580915 system_pods.go:61] "snapshot-controller-56fcc65765-4dzpn" [c3b8e9d9-8995-48af-9364-f05df187e217] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:57:05.002564  580915 system_pods.go:61] "snapshot-controller-56fcc65765-x8mvx" [dde250f3-e326-462e-bf09-293688f9432e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:57:05.002573  580915 system_pods.go:61] "storage-provisioner" [f4d76f92-bb46-475d-8dbf-005c02591891] Running
	I1007 12:57:05.002580  580915 system_pods.go:74] duration metric: took 182.76885ms to wait for pod list to return data ...
	I1007 12:57:05.002593  580915 default_sa.go:34] waiting for default service account to be created ...
	I1007 12:57:05.200055  580915 default_sa.go:45] found service account: "default"
	I1007 12:57:05.200088  580915 default_sa.go:55] duration metric: took 197.48786ms for default service account to be created ...
	I1007 12:57:05.200100  580915 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 12:57:05.344755  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:05.407632  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:05.415818  580915 system_pods.go:86] 18 kube-system pods found
	I1007 12:57:05.415851  580915 system_pods.go:89] "coredns-7c65d6cfc9-hf8kj" [09d0cec9-5154-4d58-b683-609a2ad27f7c] Running
	I1007 12:57:05.415863  580915 system_pods.go:89] "csi-hostpath-attacher-0" [6451a8ad-1495-4655-b571-72b4575055ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1007 12:57:05.415871  580915 system_pods.go:89] "csi-hostpath-resizer-0" [dbf262a8-4555-456c-b88b-c1cd812c1b7e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1007 12:57:05.415881  580915 system_pods.go:89] "csi-hostpathplugin-g6pv8" [cf33d298-8fbe-444f-8212-f11c97ba8b32] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1007 12:57:05.415886  580915 system_pods.go:89] "etcd-addons-956205" [0f70aeb5-afe2-4791-924e-bb632e71780f] Running
	I1007 12:57:05.415891  580915 system_pods.go:89] "kindnet-ppxk9" [075d5707-507e-40c1-a4b7-4a728f6c9451] Running
	I1007 12:57:05.415896  580915 system_pods.go:89] "kube-apiserver-addons-956205" [53e4e638-1997-4359-9ad6-67272cffc4ee] Running
	I1007 12:57:05.415901  580915 system_pods.go:89] "kube-controller-manager-addons-956205" [889edccc-0259-4173-8359-6486109f3feb] Running
	I1007 12:57:05.415909  580915 system_pods.go:89] "kube-ingress-dns-minikube" [f233761d-43cb-41f9-81d0-f52eab1d2b89] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1007 12:57:05.415913  580915 system_pods.go:89] "kube-proxy-zl72w" [62338a59-f097-43c8-a7e1-414d99cb93a5] Running
	I1007 12:57:05.415917  580915 system_pods.go:89] "kube-scheduler-addons-956205" [53389a08-3320-4187-a3e6-9e3aae407c9a] Running
	I1007 12:57:05.415924  580915 system_pods.go:89] "metrics-server-84c5f94fbc-h8njn" [9b51577a-261f-4581-bc76-12b95938c80c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 12:57:05.415932  580915 system_pods.go:89] "nvidia-device-plugin-daemonset-dfwvg" [33c78e7e-3607-4278-ad34-573034aa90cf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1007 12:57:05.415938  580915 system_pods.go:89] "registry-66c9cd494c-5dl6n" [6c6e6f59-720f-486b-99d8-a848c9f81e07] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1007 12:57:05.415944  580915 system_pods.go:89] "registry-proxy-9kpnr" [145ee917-0ec2-4857-8fe0-d759e2d5ec18] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1007 12:57:05.415950  580915 system_pods.go:89] "snapshot-controller-56fcc65765-4dzpn" [c3b8e9d9-8995-48af-9364-f05df187e217] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:57:05.415958  580915 system_pods.go:89] "snapshot-controller-56fcc65765-x8mvx" [dde250f3-e326-462e-bf09-293688f9432e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1007 12:57:05.415965  580915 system_pods.go:89] "storage-provisioner" [f4d76f92-bb46-475d-8dbf-005c02591891] Running
	I1007 12:57:05.415974  580915 system_pods.go:126] duration metric: took 215.867832ms to wait for k8s-apps to be running ...
	I1007 12:57:05.415982  580915 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 12:57:05.416040  580915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 12:57:05.428441  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:05.437201  580915 system_svc.go:56] duration metric: took 21.207847ms WaitForService to wait for kubelet
	I1007 12:57:05.437284  580915 kubeadm.go:582] duration metric: took 16.068277302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
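The kubelet check a few lines up is a plain systemd probe: `systemctl is-active --quiet <unit>` prints nothing and signals state purely through its exit code (0 means active). Below is a minimal local sketch of that probe, assuming systemd is present; minikube runs the logged command over SSH inside the node container, so this stand-in illustrates the exit-code semantics rather than minikube's ssh_runner code:

```go
package main

import (
	"fmt"
	"os/exec"
)

// unitActive reports whether a systemd unit is active. `is-active --quiet`
// suppresses output and exits non-zero for any state other than "active",
// so a nil error from Run is the whole signal.
func unitActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	fmt.Println("kubelet active:", unitActive("kubelet"))
}
```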
	I1007 12:57:05.437318  580915 node_conditions.go:102] verifying NodePressure condition ...
	I1007 12:57:05.595203  580915 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 12:57:05.595286  580915 node_conditions.go:123] node cpu capacity is 2
	I1007 12:57:05.595322  580915 node_conditions.go:105] duration metric: took 157.982272ms to run NodePressure ...
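The two node_conditions lines above read capacity straight off the Node object's status: 203034800Ki of ephemeral storage and 2 CPUs, so pods whose CPU requests exceed what remains of those 2 cores will sit Pending as Unschedulable. A hedged client-go sketch that prints the same fields; the kubeconfig path here is a placeholder, not taken from this run:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig; a real run would point at the minikube profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity["cpu"]
		storage := n.Status.Capacity["ephemeral-storage"]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```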
	I1007 12:57:05.595374  580915 start.go:241] waiting for startup goroutines ...
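From here down to the kapi.go:107 summary lines, the log is one pattern repeating: for each addon's label selector (kubernetes.io/minikube-addons=registry, =csi-hostpath-driver, app.kubernetes.io/name=ingress-nginx, and later =gcp-auth), poll the matching pods roughly every half second until all report Running, then emit the elapsed "duration metric". A minimal client-go sketch of such a wait loop, reusing a clientset like the one in the previous snippet; it illustrates the pattern and is not minikube's kapi implementation:

```go
package podwait

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodsRunning polls pods matching selector in ns until every match
// is Running or timeout elapses, logging each miss much like kapi.go:96.
func WaitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or nothing scheduled yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		log.Printf("duration metric: took %s to wait for %s", time.Since(start), selector)
	}
	return err
}
```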
	I1007 12:57:05.851002  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:05.901252  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:05.929413  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:06.344759  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:06.400331  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:06.426399  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:06.844901  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:06.899478  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:06.926061  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:07.343963  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:07.399500  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:07.425909  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:07.844046  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:07.898933  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:07.926208  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:08.343751  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:08.401511  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:08.427514  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:08.843678  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:08.899693  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:08.925843  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:09.343519  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:09.444321  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:09.445953  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:09.843997  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:09.899195  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:09.925772  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:10.344999  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:10.400326  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:10.426305  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:10.844206  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:10.899577  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:10.925993  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:11.347509  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:11.399825  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:11.425570  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:11.846637  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:11.905153  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:12.005291  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:12.366781  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:12.402122  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:12.429395  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:12.843476  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:12.900849  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:12.925241  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:13.344437  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:13.444714  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:13.446058  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:13.844553  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:13.899891  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:13.926262  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:14.343061  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:14.399588  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:14.425976  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:14.844828  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:14.900769  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:14.928153  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:15.344478  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:15.446328  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:15.447305  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:15.843541  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:15.903970  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:15.926263  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:16.343630  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:16.401088  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:16.501488  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:16.851205  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:16.902238  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:16.927810  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:17.344495  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:17.400808  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:17.427422  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:17.847675  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:17.945289  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:17.945957  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:18.344764  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:18.399546  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:18.426713  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:18.878656  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:18.915887  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:18.937206  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:19.348771  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:19.401448  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:19.428262  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:19.852675  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:19.902118  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:19.927827  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:20.343933  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:20.399441  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:20.426239  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:20.843470  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:20.900217  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:20.928192  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:21.343701  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:21.399611  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:21.426349  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:21.843553  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:21.899817  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:21.925563  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:22.344076  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:22.401589  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:22.426326  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:22.843997  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:22.899560  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:22.925723  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:23.344357  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:23.444356  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:23.445447  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:23.843345  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:23.899643  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:23.925299  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:24.342923  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:24.400136  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:24.426017  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:24.844540  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:24.899943  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:24.926292  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:25.344736  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:25.398842  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:25.425757  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1007 12:57:25.844259  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:25.944171  580915 kapi.go:107] duration metric: took 25.53245814s to wait for kubernetes.io/minikube-addons=registry ...
	I1007 12:57:25.946347  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:26.342838  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:26.399739  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:26.843201  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:26.899965  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:27.344388  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:27.399207  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:27.847606  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:27.928903  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:28.346631  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:28.401721  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:28.849113  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:28.903280  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:29.354444  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:29.400090  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:29.844505  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:29.915463  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:30.344667  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:30.400372  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:30.844583  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:30.946310  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:31.343968  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:31.399634  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:31.844123  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:31.899880  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:32.344515  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:32.400375  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:32.844469  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:32.899586  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:33.344397  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:33.444786  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:33.843975  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:33.945564  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:34.343479  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:34.399829  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:34.844257  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:34.898934  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:35.343806  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:35.398700  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:35.843051  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:35.900041  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:36.346867  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:36.399912  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:36.843445  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:36.899757  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:37.344202  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:37.453098  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:37.844826  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:37.899460  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:38.344791  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:38.398719  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:38.844512  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:38.901402  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:39.345262  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:39.402185  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:39.845906  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:39.900872  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:40.354896  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:40.399383  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:40.844670  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:40.900391  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:41.343654  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:41.399853  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:41.844490  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:41.900228  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:42.345156  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:42.399562  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:42.842901  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:42.899431  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:43.343428  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:43.444668  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:43.843229  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:43.899983  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:44.343541  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:44.399798  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:44.843865  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:44.899905  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:45.346457  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:45.400182  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:45.843260  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:45.928176  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:46.345144  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:46.399282  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:46.843842  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:46.898940  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:47.345026  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:47.399290  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:47.845391  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:47.902136  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:48.343036  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:48.400371  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:48.843698  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:48.899719  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:49.344689  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:49.399044  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:49.844289  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1007 12:57:49.899539  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:50.354178  580915 kapi.go:107] duration metric: took 49.015566358s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1007 12:57:50.399951  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:50.899397  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:51.400274  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:51.899370  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:52.399822  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:52.900089  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:53.399389  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:53.900656  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:54.400342  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:54.899525  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:55.399636  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:55.899727  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:56.400414  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:56.899907  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:57.399573  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:57.898859  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:58.399583  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:58.899517  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:59.400005  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:57:59.899662  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:00.402280  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:00.900065  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:01.400373  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:01.901092  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:02.399970  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:02.900905  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:03.399995  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:03.901122  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:04.400241  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:04.901959  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:05.399722  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:05.900323  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:06.399558  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:06.900953  580915 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1007 12:58:07.400298  580915 kapi.go:107] duration metric: took 1m7.005426517s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1007 12:58:24.927707  580915 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1007 12:58:24.927731  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:25.422755  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:25.923458  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:26.422121  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:26.921889  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:27.421698  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:27.921776  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:28.421459  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:28.922682  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:29.421884  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:29.922147  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:30.421697  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:30.921795  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:31.422888  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:31.922491  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:32.422382  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:32.921975  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:33.423138  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:33.922992  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:34.421951  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:34.921825  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:35.421757  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:35.922893  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:36.423299  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:36.922131  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:37.422648  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:37.922760  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:38.424902  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:38.922246  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:39.421739  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:39.923271  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:40.424001  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:40.922108  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:41.422774  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:41.923005  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:42.425769  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:42.921722  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:43.423198  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:43.922570  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:44.422483  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:44.921798  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:45.423315  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:45.922302  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:46.422037  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:46.922419  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:47.423262  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:47.922506  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:48.424736  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:48.922480  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:49.421957  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:49.923157  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:50.421618  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:50.923278  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:51.423088  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:51.922105  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:52.423171  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:52.922374  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:53.422010  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:53.921573  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:54.427563  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:54.921774  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:55.421553  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:55.922626  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:56.422356  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:56.922910  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:57.423455  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:57.923025  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:58.421894  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:58.921867  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:59.422171  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:58:59.921553  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:00.427668  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:00.922602  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:01.422972  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:01.921973  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:02.425383  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:02.921377  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:03.422696  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:03.923186  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:04.422349  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:04.922125  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:05.421940  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:05.928776  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:06.422818  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:06.922618  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:07.422987  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:07.922424  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:08.423456  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:08.922366  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:09.422183  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:09.922005  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:10.422514  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:10.922032  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:11.422351  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:11.923046  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:12.421849  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:12.921830  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:13.422154  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:13.922088  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:14.421427  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:14.922458  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:15.421915  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:15.922347  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:16.424936  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:16.921807  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:17.422408  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:17.922489  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:18.422297  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:18.922428  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:19.422506  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:19.922742  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:20.424670  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:20.921635  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:21.422069  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:21.922022  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:22.422584  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:22.921963  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:23.422101  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:23.922158  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:24.422276  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:24.922487  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:25.422357  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:25.922496  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:26.422442  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:26.922030  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:27.422036  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:27.922287  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:28.423759  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:28.922449  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:29.422302  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:29.922727  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:30.424862  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:30.921864  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:31.422295  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:31.923071  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:32.432525  580915 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1007 12:59:32.922250  580915 kapi.go:107] duration metric: took 2m30.003918226s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1007 12:59:32.924313  580915 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-956205 cluster.
	I1007 12:59:32.926619  580915 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1007 12:59:32.928515  580915 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1007 12:59:32.930351  580915 out.go:177] * Enabled addons: nvidia-device-plugin, storage-provisioner, cloud-spanner, storage-provisioner-rancher, volcano, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1007 12:59:32.932388  580915 addons.go:510] duration metric: took 2m43.562785346s for enable addons: enabled=[nvidia-device-plugin storage-provisioner cloud-spanner storage-provisioner-rancher volcano ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1007 12:59:32.932452  580915 start.go:246] waiting for cluster config update ...
	I1007 12:59:32.932497  580915 start.go:255] writing updated cluster config ...
	I1007 12:59:32.932826  580915 ssh_runner.go:195] Run: rm -f paused
	I1007 12:59:33.328851  580915 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 12:59:33.331145  580915 out.go:177] * Done! kubectl is now configured to use "addons-956205" cluster and "default" namespace by default
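
	A minimal sketch of opting a pod out of the gcp-auth credential mount described in the output above, assuming the conventional label value "true" (the pod name and image here are placeholders, not taken from this run):

	  # hypothetical example: create a pod carrying the gcp-auth-skip-secret label
	  kubectl --context addons-956205 run no-gcp-creds --image=nginx \
	    --labels=gcp-auth-skip-secret=true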
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	b3f8d782b0725       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   d82274f50f48b       gcp-auth-89d5ffd79-4qqmt
	ed63208ed1c8c       1a9605c872c1d       4 minutes ago       Running             admission                                0                   0e344eaa0528c       volcano-admission-5874dfdd79-58pv9
	f4d1479b746a8       289a818c8d9c5       4 minutes ago       Running             controller                               0                   3bf20802b9e58       ingress-nginx-controller-bc57996ff-x8bvq
	2596102a31abb       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   ee6b014ad8670       csi-hostpathplugin-g6pv8
	5687ad2016e1f       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   ee6b014ad8670       csi-hostpathplugin-g6pv8
	46676a5df4a8d       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   ee6b014ad8670       csi-hostpathplugin-g6pv8
	a6f79aafff1e7       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   ee6b014ad8670       csi-hostpathplugin-g6pv8
	c2c2e5dc5a040       23cbb28ae641a       5 minutes ago       Running             volcano-controllers                      0                   27a112867941a       volcano-controllers-789ffc5785-k4lc4
	a9aeed83e3f69       6aa88c604f2b4       5 minutes ago       Running             volcano-scheduler                        0                   4fed9d8e9ec98       volcano-scheduler-6c9778cbdf-f5pr5
	84c2b91b95004       420193b27261a       5 minutes ago       Exited              patch                                    0                   5c73fea1ed9de       ingress-nginx-admission-patch-db6h9
	69ff29d86b9b3       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   ee6b014ad8670       csi-hostpathplugin-g6pv8
	8cb211dc816b5       420193b27261a       5 minutes ago       Exited              create                                   0                   8c5d2ce088916       ingress-nginx-admission-create-hzhdp
	8690eb00b6e3f       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   17ef6309afdae       snapshot-controller-56fcc65765-4dzpn
	167adfeb7e675       77bdba588b953       5 minutes ago       Running             yakd                                     0                   b940c9fab0676       yakd-dashboard-67d98fc6b-45vd4
	4b545455b6e2b       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   6be475ede7478       snapshot-controller-56fcc65765-x8mvx
	5e674204d0486       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   ec80b1bbc340c       local-path-provisioner-86d989889c-bjmdm
	6b80e19328314       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   81e38b1bd4338       metrics-server-84c5f94fbc-h8njn
	73f41340ac9c7       f7ed138f698f6       5 minutes ago       Running             registry-proxy                           0                   683871bb04b29       registry-proxy-9kpnr
	412a63d4ad2fb       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   494fde8eaa12f       nvidia-device-plugin-daemonset-dfwvg
	096ac91d245b3       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   66abedbaa9727       registry-66c9cd494c-5dl6n
	a803375cd7d6f       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   f48e72a5f94d0       cloud-spanner-emulator-5b584cc74-rpdgz
	4fef979614ec1       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   ee6b014ad8670       csi-hostpathplugin-g6pv8
	14824c5fbc9de       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   0ccf80b14d688       csi-hostpath-resizer-0
	909161101a43b       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   3bd6f383592fe       csi-hostpath-attacher-0
	7f4d24ba11fa1       4f725bf50aaa5       5 minutes ago       Running             gadget                                   0                   407fef65085c7       gadget-sqb4m
	f3aa13f552f90       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   9017c832df851       kube-ingress-dns-minikube
	a7daf88b5b7a7       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   7911cf07b40cb       coredns-7c65d6cfc9-hf8kj
	e59907e6fdfad       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   8fb8d22a72435       storage-provisioner
	015e748e7c509       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   6bb8b9565da37       kindnet-ppxk9
	cb0f2e879e334       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   d9d5d3318bbf3       kube-proxy-zl72w
	090f80f51bd20       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   3fde6bb2265d6       kube-apiserver-addons-956205
	0b449638491ba       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   a1d8ead24bbf2       kube-scheduler-addons-956205
	fe07fdb287277       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   b4603f788bf91       kube-controller-manager-addons-956205
	fb37daafe65f8       27e3830e14027       6 minutes ago       Running             etcd                                     0                   6cd277bb3b25b       etcd-addons-956205
	
	
	==> containerd <==
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.443948969Z" level=info msg="TearDown network for sandbox \"f3361c77a25d8c872a7e257c09a7c18bf7d642c14d6e3b5149978d9a1742391d\" successfully"
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.443988263Z" level=info msg="StopPodSandbox for \"f3361c77a25d8c872a7e257c09a7c18bf7d642c14d6e3b5149978d9a1742391d\" returns successfully"
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.444863498Z" level=info msg="RemovePodSandbox for \"f3361c77a25d8c872a7e257c09a7c18bf7d642c14d6e3b5149978d9a1742391d\""
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.444911013Z" level=info msg="Forcibly stopping sandbox \"f3361c77a25d8c872a7e257c09a7c18bf7d642c14d6e3b5149978d9a1742391d\""
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.454240476Z" level=info msg="TearDown network for sandbox \"f3361c77a25d8c872a7e257c09a7c18bf7d642c14d6e3b5149978d9a1742391d\" successfully"
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.468570021Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3361c77a25d8c872a7e257c09a7c18bf7d642c14d6e3b5149978d9a1742391d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.469051785Z" level=info msg="RemovePodSandbox \"f3361c77a25d8c872a7e257c09a7c18bf7d642c14d6e3b5149978d9a1742391d\" returns successfully"
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.475336588Z" level=info msg="StopPodSandbox for \"bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec\""
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.489546964Z" level=info msg="TearDown network for sandbox \"bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec\" successfully"
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.489597654Z" level=info msg="StopPodSandbox for \"bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec\" returns successfully"
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.490182375Z" level=info msg="RemovePodSandbox for \"bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec\""
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.490222940Z" level=info msg="Forcibly stopping sandbox \"bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec\""
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.499584099Z" level=info msg="TearDown network for sandbox \"bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec\" successfully"
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.506257573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 07 12:59:44 addons-956205 containerd[814]: time="2024-10-07T12:59:44.506376923Z" level=info msg="RemovePodSandbox \"bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec\" returns successfully"
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.510462556Z" level=info msg="RemoveContainer for \"5fe71631974eb1629b94090a1ec4adec5607847f6413c7593e8ab95e96b8c26d\""
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.516930561Z" level=info msg="RemoveContainer for \"5fe71631974eb1629b94090a1ec4adec5607847f6413c7593e8ab95e96b8c26d\" returns successfully"
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.518690606Z" level=info msg="StopPodSandbox for \"afd845d4a20d205fc76cb47a0cbf8132c47f2f3511e42ea2328100455b67aad6\""
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.528204402Z" level=info msg="TearDown network for sandbox \"afd845d4a20d205fc76cb47a0cbf8132c47f2f3511e42ea2328100455b67aad6\" successfully"
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.528382098Z" level=info msg="StopPodSandbox for \"afd845d4a20d205fc76cb47a0cbf8132c47f2f3511e42ea2328100455b67aad6\" returns successfully"
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.528958687Z" level=info msg="RemovePodSandbox for \"afd845d4a20d205fc76cb47a0cbf8132c47f2f3511e42ea2328100455b67aad6\""
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.529003183Z" level=info msg="Forcibly stopping sandbox \"afd845d4a20d205fc76cb47a0cbf8132c47f2f3511e42ea2328100455b67aad6\""
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.549935614Z" level=info msg="TearDown network for sandbox \"afd845d4a20d205fc76cb47a0cbf8132c47f2f3511e42ea2328100455b67aad6\" successfully"
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.556948184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"afd845d4a20d205fc76cb47a0cbf8132c47f2f3511e42ea2328100455b67aad6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 07 13:00:44 addons-956205 containerd[814]: time="2024-10-07T13:00:44.557521286Z" level=info msg="RemovePodSandbox \"afd845d4a20d205fc76cb47a0cbf8132c47f2f3511e42ea2328100455b67aad6\" returns successfully"
	
	
	==> coredns [a7daf88b5b7a71e423ee42938f8a4ad6ec56862bef6be4f15e3753aa2e2dd0ef] <==
	[INFO] 10.244.0.8:55834 - 18038 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000073427s
	[INFO] 10.244.0.8:55834 - 24238 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002895238s
	[INFO] 10.244.0.8:55834 - 17221 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002747983s
	[INFO] 10.244.0.8:55834 - 34292 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000137885s
	[INFO] 10.244.0.8:55834 - 50098 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000109061s
	[INFO] 10.244.0.8:53883 - 39082 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00010175s
	[INFO] 10.244.0.8:53883 - 39328 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000041419s
	[INFO] 10.244.0.8:52841 - 838 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000060955s
	[INFO] 10.244.0.8:52841 - 1093 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000036955s
	[INFO] 10.244.0.8:58701 - 59031 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043175s
	[INFO] 10.244.0.8:58701 - 59203 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000046932s
	[INFO] 10.244.0.8:43434 - 42054 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002219672s
	[INFO] 10.244.0.8:43434 - 42259 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001034296s
	[INFO] 10.244.0.8:32935 - 13864 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082066s
	[INFO] 10.244.0.8:32935 - 14292 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00005691s
	[INFO] 10.244.0.24:37714 - 45177 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000168186s
	[INFO] 10.244.0.24:50779 - 23865 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000128113s
	[INFO] 10.244.0.24:60395 - 15000 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000154335s
	[INFO] 10.244.0.24:43637 - 9869 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000075691s
	[INFO] 10.244.0.24:49185 - 15197 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000133972s
	[INFO] 10.244.0.24:40330 - 35214 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000077611s
	[INFO] 10.244.0.24:60767 - 31256 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002262798s
	[INFO] 10.244.0.24:60923 - 59472 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001846731s
	[INFO] 10.244.0.24:37322 - 32585 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001220944s
	[INFO] 10.244.0.24:46466 - 13194 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001757296s
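
	The NXDOMAIN/NOERROR pairs above are ordinary cluster-DNS search-path expansion: with the pod's default ndots setting, a lookup such as registry.kube-system.svc.cluster.local is first attempted with each resolv.conf search suffix appended (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal), and only the final bare query returns NOERROR. A minimal way to observe the same expansion from inside the cluster (busybox is a placeholder image):

	  kubectl --context addons-956205 run dns-test --rm -it --image=busybox \
	    --restart=Never -- nslookup registry.kube-system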
	
	
	==> describe nodes <==
	Name:               addons-956205
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-956205
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=addons-956205
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T12_56_45_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-956205
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-956205"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 12:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-956205
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:02:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 12:59:48 +0000   Mon, 07 Oct 2024 12:56:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 12:59:48 +0000   Mon, 07 Oct 2024 12:56:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 12:59:48 +0000   Mon, 07 Oct 2024 12:56:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 12:59:48 +0000   Mon, 07 Oct 2024 12:56:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-956205
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 0b3a297c419e4561baa528cd753eac6c
	  System UUID:                9a90028e-5baa-4bdd-8b76-23472c738cb9
	  Boot ID:                    21f414e1-c967-4988-b7c1-53380c0b20c8
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-rpdgz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-sqb4m                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  gcp-auth                    gcp-auth-89d5ffd79-4qqmt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-x8bvq    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-hf8kj                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpathplugin-g6pv8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-956205                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-ppxk9                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-956205                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-956205       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-zl72w                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-956205                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-h8njn             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-dfwvg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-5dl6n                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-9kpnr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-4dzpn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-x8mvx        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  local-path-storage          local-path-provisioner-86d989889c-bjmdm     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-5874dfdd79-58pv9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-789ffc5785-k4lc4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-6c9778cbdf-f5pr5          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-45vd4              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m15s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node addons-956205 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x7 over 6m15s)  kubelet          Node addons-956205 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node addons-956205 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-956205 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-956205 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-956205 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s                   node-controller  Node addons-956205 event: Registered Node addons-956205 in Controller
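
	Note that the Allocated resources table above already shows cpu requests at 1050m (52%) of this 2-CPU node's allocatable capacity. A quick way to re-check that summary against live state (node name taken from the output above):

	  kubectl --context addons-956205 describe node addons-956205 | grep -A 12 "Allocated resources"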
	
	
	==> dmesg <==
	[Oct 7 12:30] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.109813] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	
	
	==> etcd [fb37daafe65f84f9c009553dc27d8b8c341972a9d1509aee426c055c785a84c5] <==
	{"level":"info","ts":"2024-10-07T12:56:38.200206Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-07T12:56:38.200377Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-07T12:56:38.200397Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-07T12:56:38.202556Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-07T12:56:38.202596Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-07T12:56:38.269756Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-07T12:56:38.270027Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-07T12:56:38.270168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-07T12:56:38.270262Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-07T12:56:38.270361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-07T12:56:38.270448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-07T12:56:38.270541Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-07T12:56:38.273850Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:56:38.277950Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-956205 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-07T12:56:38.278131Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T12:56:38.278764Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-07T12:56:38.279727Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T12:56:38.286204Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:56:38.286686Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:56:38.289870Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-07T12:56:38.295774Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-07T12:56:38.295877Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-07T12:56:38.293957Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-07T12:56:38.290651Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-07T12:56:38.307281Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [b3f8d782b072514b5649088832762dad776451616a0c9ebf673f6d179769ae6f] <==
	2024/10/07 12:59:32 GCP Auth Webhook started!
	2024/10/07 12:59:49 Ready to marshal response ...
	2024/10/07 12:59:49 Ready to write response ...
	2024/10/07 12:59:50 Ready to marshal response ...
	2024/10/07 12:59:50 Ready to write response ...
	
	
	==> kernel <==
	 13:02:52 up  2:45,  0 users,  load average: 0.19, 1.20, 1.82
	Linux addons-956205 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [015e748e7c509c4902d09be368ce7bcd03c84304e1546430c9c483b357efc75b] <==
	I1007 13:00:51.318715       1 main.go:299] handling current node
	I1007 13:01:01.326337       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:01:01.326383       1 main.go:299] handling current node
	I1007 13:01:11.328237       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:01:11.328274       1 main.go:299] handling current node
	I1007 13:01:21.323296       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:01:21.323334       1 main.go:299] handling current node
	I1007 13:01:31.325010       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:01:31.325046       1 main.go:299] handling current node
	I1007 13:01:41.318879       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:01:41.319096       1 main.go:299] handling current node
	I1007 13:01:51.318668       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:01:51.318705       1 main.go:299] handling current node
	I1007 13:02:01.323465       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:02:01.323504       1 main.go:299] handling current node
	I1007 13:02:11.325964       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:02:11.325999       1 main.go:299] handling current node
	I1007 13:02:21.319440       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:02:21.319478       1 main.go:299] handling current node
	I1007 13:02:31.323464       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:02:31.323499       1 main.go:299] handling current node
	I1007 13:02:41.328524       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:02:41.328559       1 main.go:299] handling current node
	I1007 13:02:51.319475       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1007 13:02:51.319538       1 main.go:299] handling current node
	
	
	==> kube-apiserver [090f80f51bd20981c42c61c056a790a2b83fbae311cbe03a348af635e271ccb8] <==
	W1007 12:58:03.210098       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:04.221082       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:05.299589       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:05.863202       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.142.181:443: connect: connection refused
	E1007 12:58:05.863246       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.142.181:443: connect: connection refused" logger="UnhandledError"
	W1007 12:58:05.864835       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:05.957793       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.142.181:443: connect: connection refused
	E1007 12:58:05.957830       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.142.181:443: connect: connection refused" logger="UnhandledError"
	W1007 12:58:05.959393       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:06.355005       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:07.413404       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:08.459464       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:09.493513       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:10.519078       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:11.552439       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:12.601273       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:13.690356       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.249.123:443: connect: connection refused
	W1007 12:58:24.867333       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.142.181:443: connect: connection refused
	E1007 12:58:24.867372       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.142.181:443: connect: connection refused" logger="UnhandledError"
	W1007 12:59:05.873331       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.142.181:443: connect: connection refused
	E1007 12:59:05.873369       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.142.181:443: connect: connection refused" logger="UnhandledError"
	W1007 12:59:05.965040       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.142.181:443: connect: connection refused
	E1007 12:59:05.965081       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.142.181:443: connect: connection refused" logger="UnhandledError"
	I1007 12:59:49.865402       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1007 12:59:49.902051       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [fe07fdb2872779ff5450f3e1c10bf2009382c327a8185aa7e5a63d37a26e3397] <==
	I1007 12:59:05.908369       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 12:59:05.909128       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 12:59:05.927020       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 12:59:05.973986       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:05.979653       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:05.989783       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:06.004051       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:07.465160       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:07.477176       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 12:59:08.583957       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:08.606660       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 12:59:09.592878       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:09.601237       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:09.609765       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1007 12:59:09.616085       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 12:59:09.624129       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 12:59:09.631254       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1007 12:59:32.588770       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="18.584662ms"
	I1007 12:59:32.589023       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="51.134µs"
	I1007 12:59:39.024290       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1007 12:59:39.028532       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1007 12:59:39.086299       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1007 12:59:39.089734       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1007 12:59:48.158845       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-956205"
	I1007 12:59:49.575753       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [cb0f2e879e33439ff894a3153797cfc83a4f4986fc263c603b5cacfa87164d55] <==
	I1007 12:56:50.671934       1 server_linux.go:66] "Using iptables proxy"
	I1007 12:56:50.778306       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1007 12:56:50.778385       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1007 12:56:50.807924       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1007 12:56:50.807977       1 server_linux.go:169] "Using iptables Proxier"
	I1007 12:56:50.810211       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1007 12:56:50.810707       1 server.go:483] "Version info" version="v1.31.1"
	I1007 12:56:50.810726       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1007 12:56:50.822409       1 config.go:199] "Starting service config controller"
	I1007 12:56:50.822444       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1007 12:56:50.822491       1 config.go:105] "Starting endpoint slice config controller"
	I1007 12:56:50.822495       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1007 12:56:50.834950       1 config.go:328] "Starting node config controller"
	I1007 12:56:50.834993       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1007 12:56:50.923522       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1007 12:56:50.923578       1 shared_informer.go:320] Caches are synced for service config
	I1007 12:56:50.935506       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [0b449638491baf87e94ac2e4a711e5af1a66d76f135503ee60e5667ecd991a6d] <==
	W1007 12:56:41.936394       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 12:56:41.938375       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:41.936521       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1007 12:56:41.938408       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:41.936578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 12:56:41.938433       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:41.936730       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 12:56:41.938456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:41.936774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:56:41.938487       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:41.943976       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1007 12:56:41.944021       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:42.861911       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 12:56:42.862045       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:42.913588       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 12:56:42.913631       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:42.964182       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 12:56:42.964225       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:43.000152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1007 12:56:43.000208       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:43.034995       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 12:56:43.035329       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1007 12:56:43.039708       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 12:56:43.039801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1007 12:56:43.624480       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
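The burst of "forbidden" list/watch errors above is the kube-scheduler starting before its RBAC bindings are visible to the API server; once the informer caches sync (the last line) the errors stop. A minimal sketch for confirming the permissions after startup, reusing the addons-956205 context from this run:

$ kubectl --context addons-956205 auth can-i list pods --as=system:kube-scheduler
$ kubectl --context addons-956205 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler
# both should print "yes" once the system:kube-scheduler bindings have propagated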
	
	
	==> kubelet <==
	Oct 07 12:59:06 addons-956205 kubelet[1491]: I1007 12:59:06.174788    1491 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkv2c\" (UniqueName: \"kubernetes.io/projected/9b163b5a-2789-4999-8647-6504c5ae56ef-kube-api-access-kkv2c\") pod \"gcp-auth-certs-patch-mz2l9\" (UID: \"9b163b5a-2789-4999-8647-6504c5ae56ef\") " pod="gcp-auth/gcp-auth-certs-patch-mz2l9"
	Oct 07 12:59:08 addons-956205 kubelet[1491]: I1007 12:59:08.593107    1491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkv2c\" (UniqueName: \"kubernetes.io/projected/9b163b5a-2789-4999-8647-6504c5ae56ef-kube-api-access-kkv2c\") pod \"9b163b5a-2789-4999-8647-6504c5ae56ef\" (UID: \"9b163b5a-2789-4999-8647-6504c5ae56ef\") "
	Oct 07 12:59:08 addons-956205 kubelet[1491]: I1007 12:59:08.596376    1491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b163b5a-2789-4999-8647-6504c5ae56ef-kube-api-access-kkv2c" (OuterVolumeSpecName: "kube-api-access-kkv2c") pod "9b163b5a-2789-4999-8647-6504c5ae56ef" (UID: "9b163b5a-2789-4999-8647-6504c5ae56ef"). InnerVolumeSpecName "kube-api-access-kkv2c". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 12:59:08 addons-956205 kubelet[1491]: I1007 12:59:08.694677    1491 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsmtf\" (UniqueName: \"kubernetes.io/projected/bcadf404-01b6-44f8-9447-11808cafc25c-kube-api-access-dsmtf\") pod \"bcadf404-01b6-44f8-9447-11808cafc25c\" (UID: \"bcadf404-01b6-44f8-9447-11808cafc25c\") "
	Oct 07 12:59:08 addons-956205 kubelet[1491]: I1007 12:59:08.694835    1491 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-kkv2c\" (UniqueName: \"kubernetes.io/projected/9b163b5a-2789-4999-8647-6504c5ae56ef-kube-api-access-kkv2c\") on node \"addons-956205\" DevicePath \"\""
	Oct 07 12:59:08 addons-956205 kubelet[1491]: I1007 12:59:08.696797    1491 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcadf404-01b6-44f8-9447-11808cafc25c-kube-api-access-dsmtf" (OuterVolumeSpecName: "kube-api-access-dsmtf") pod "bcadf404-01b6-44f8-9447-11808cafc25c" (UID: "bcadf404-01b6-44f8-9447-11808cafc25c"). InnerVolumeSpecName "kube-api-access-dsmtf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 07 12:59:08 addons-956205 kubelet[1491]: I1007 12:59:08.795402    1491 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dsmtf\" (UniqueName: \"kubernetes.io/projected/bcadf404-01b6-44f8-9447-11808cafc25c-kube-api-access-dsmtf\") on node \"addons-956205\" DevicePath \"\""
	Oct 07 12:59:09 addons-956205 kubelet[1491]: I1007 12:59:09.462942    1491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd6ae7c9427fa2336497dc09667d0f37adbdcc455141516ed400a054bb1ff4ec"
	Oct 07 12:59:09 addons-956205 kubelet[1491]: I1007 12:59:09.467561    1491 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3361c77a25d8c872a7e257c09a7c18bf7d642c14d6e3b5149978d9a1742391d"
	Oct 07 12:59:32 addons-956205 kubelet[1491]: I1007 12:59:32.574395    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-4qqmt" podStartSLOduration=65.634568872 podStartE2EDuration="1m8.574355793s" podCreationTimestamp="2024-10-07 12:58:24 +0000 UTC" firstStartedPulling="2024-10-07 12:59:29.251260685 +0000 UTC m=+165.005651175" lastFinishedPulling="2024-10-07 12:59:32.191047598 +0000 UTC m=+167.945438096" observedRunningTime="2024-10-07 12:59:32.574242039 +0000 UTC m=+168.328632529" watchObservedRunningTime="2024-10-07 12:59:32.574355793 +0000 UTC m=+168.328746283"
	Oct 07 12:59:40 addons-956205 kubelet[1491]: I1007 12:59:40.425257    1491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b163b5a-2789-4999-8647-6504c5ae56ef" path="/var/lib/kubelet/pods/9b163b5a-2789-4999-8647-6504c5ae56ef/volumes"
	Oct 07 12:59:40 addons-956205 kubelet[1491]: I1007 12:59:40.425732    1491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bcadf404-01b6-44f8-9447-11808cafc25c" path="/var/lib/kubelet/pods/bcadf404-01b6-44f8-9447-11808cafc25c/volumes"
	Oct 07 12:59:44 addons-956205 kubelet[1491]: I1007 12:59:44.415947    1491 scope.go:117] "RemoveContainer" containerID="b7fe378cf821aca9025a6da688abfde23d067b91080ade684a8450b22f50be02"
	Oct 07 12:59:44 addons-956205 kubelet[1491]: I1007 12:59:44.425987    1491 scope.go:117] "RemoveContainer" containerID="2d9c50bd28595cc5281ae113c8f6317d0b2b5f23f6692bf11d1e27d5ea92ded2"
	Oct 07 12:59:50 addons-956205 kubelet[1491]: I1007 12:59:50.422143    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9kpnr" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 12:59:50 addons-956205 kubelet[1491]: I1007 12:59:50.427255    1491 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae5fbdac-7575-4ab2-a9a4-6b168558109c" path="/var/lib/kubelet/pods/ae5fbdac-7575-4ab2-a9a4-6b168558109c/volumes"
	Oct 07 13:00:14 addons-956205 kubelet[1491]: I1007 13:00:14.422819    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dfwvg" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:00:16 addons-956205 kubelet[1491]: I1007 13:00:16.421040    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-5dl6n" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:00:44 addons-956205 kubelet[1491]: I1007 13:00:44.508781    1491 scope.go:117] "RemoveContainer" containerID="5fe71631974eb1629b94090a1ec4adec5607847f6413c7593e8ab95e96b8c26d"
	Oct 07 13:01:01 addons-956205 kubelet[1491]: I1007 13:01:01.421809    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9kpnr" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:01:18 addons-956205 kubelet[1491]: I1007 13:01:18.421730    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dfwvg" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:01:31 addons-956205 kubelet[1491]: I1007 13:01:31.421113    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-5dl6n" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:02:14 addons-956205 kubelet[1491]: I1007 13:02:14.422286    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-9kpnr" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:02:44 addons-956205 kubelet[1491]: I1007 13:02:44.422238    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-dfwvg" secret="" err="secret \"gcp-auth\" not found"
	Oct 07 13:02:48 addons-956205 kubelet[1491]: I1007 13:02:48.421110    1491 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-5dl6n" secret="" err="secret \"gcp-auth\" not found"
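The recurring "Unable to retrieve pull secret" lines are the kubelet looking for the gcp-auth image pull secret that the gcp-auth addon injects into workload namespaces; they are informational when that secret was never created in kube-system. A quick check, same context as above:

$ kubectl --context addons-956205 -n kube-system get secret gcp-auth --ignore-not-found
# empty output matches the kubelet's 'secret "gcp-auth" not found'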
	
	
	==> storage-provisioner [e59907e6fdfad22084342963dc590dccd3ce9ede150ec85a42dc0e5ec70df32d] <==
	I1007 12:56:54.793389       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 12:56:54.822177       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 12:56:54.822251       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 12:56:54.838269       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 12:56:54.841728       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0819899b-9a5b-47eb-9aab-8866d1601658", APIVersion:"v1", ResourceVersion:"550", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-956205_97c7745e-40be-416d-b997-9785d994e7db became leader
	I1007 12:56:54.841769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-956205_97c7745e-40be-416d-b997-9785d994e7db!
	I1007 12:56:54.942310       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-956205_97c7745e-40be-416d-b997-9785d994e7db!
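The storage-provisioner serializes on a leader-election lock backed by the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above. A sketch for inspecting the current holder, assuming the endpoints-based lock annotation this provisioner version uses:

$ kubectl --context addons-956205 -n kube-system get endpoints k8s.io-minikube-hostpath \
    -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'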
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-956205 -n addons-956205
helpers_test.go:261: (dbg) Run:  kubectl --context addons-956205 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-hzhdp ingress-nginx-admission-patch-db6h9 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-956205 describe pod ingress-nginx-admission-create-hzhdp ingress-nginx-admission-patch-db6h9 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-956205 describe pod ingress-nginx-admission-create-hzhdp ingress-nginx-admission-patch-db6h9 test-job-nginx-0: exit status 1 (84.376223ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-hzhdp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-db6h9" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-956205 describe pod ingress-nginx-admission-create-hzhdp ingress-nginx-admission-patch-db6h9 test-job-nginx-0: exit status 1
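The NotFound errors are a benign race: the three pods listed at helpers_test.go:272 were cleaned up between the list and the describe. Re-listing by phase instead of describing captured names would avoid the non-zero exit, e.g.:

$ kubectl --context addons-956205 get pods -A --field-selector=status.phase!=Running -o name
# exits 0 and prints nothing once the pods are gone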
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable volcano --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable volcano --alsologtostderr -v=1: (11.297175912s)
--- FAIL: TestAddons/serial/Volcano (211.24s)

TestStartStop/group/old-k8s-version/serial/SecondStart (374.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-716021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1007 13:46:52.883116  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-716021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m10.904493623s)

-- stdout --
	* [old-k8s-version-716021] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-716021" primary control-plane node in "old-k8s-version-716021" cluster
	* Pulling base image v0.0.45-1727731891-master ...
	* Restarting existing docker container for "old-k8s-version-716021" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-716021 addons enable metrics-server
	
	* Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I1007 13:45:57.694588  788969 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:45:57.695224  788969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:45:57.695260  788969 out.go:358] Setting ErrFile to fd 2...
	I1007 13:45:57.695283  788969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:45:57.695575  788969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:45:57.696014  788969 out.go:352] Setting JSON to false
	I1007 13:45:57.697026  788969 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12507,"bootTime":1728296251,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1007 13:45:57.697139  788969 start.go:139] virtualization:  
	I1007 13:45:57.700424  788969 out.go:177] * [old-k8s-version-716021] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:45:57.703321  788969 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:45:57.703407  788969 notify.go:220] Checking for updates...
	I1007 13:45:57.707732  788969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:45:57.710980  788969 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:45:57.712779  788969 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	I1007 13:45:57.714617  788969 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:45:57.716406  788969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:45:57.718762  788969 config.go:182] Loaded profile config "old-k8s-version-716021": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1007 13:45:57.721151  788969 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1007 13:45:57.722853  788969 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:45:57.764777  788969 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:45:57.764910  788969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:45:57.823310  788969 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-07 13:45:57.809709072 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:45:57.823430  788969 docker.go:318] overlay module found
	I1007 13:45:57.825499  788969 out.go:177] * Using the docker driver based on existing profile
	I1007 13:45:57.827658  788969 start.go:297] selected driver: docker
	I1007 13:45:57.827678  788969 start.go:901] validating driver "docker" against &{Name:old-k8s-version-716021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-716021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:45:57.827791  788969 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:45:57.828425  788969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:45:57.906885  788969 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-07 13:45:57.895919244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:45:57.907292  788969 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:45:57.907331  788969 cni.go:84] Creating CNI manager for ""
	I1007 13:45:57.907375  788969 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 13:45:57.907426  788969 start.go:340] cluster config:
	{Name:old-k8s-version-716021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-716021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:45:57.911472  788969 out.go:177] * Starting "old-k8s-version-716021" primary control-plane node in "old-k8s-version-716021" cluster
	I1007 13:45:57.913740  788969 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 13:45:57.916157  788969 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 13:45:57.918623  788969 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1007 13:45:57.918679  788969 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1007 13:45:57.918708  788969 cache.go:56] Caching tarball of preloaded images
	I1007 13:45:57.918703  788969 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:45:57.918796  788969 preload.go:172] Found /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1007 13:45:57.918815  788969 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1007 13:45:57.918930  788969 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/config.json ...
	I1007 13:45:57.949506  788969 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 13:45:57.949527  788969 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 13:45:57.949541  788969 cache.go:194] Successfully downloaded all kic artifacts
	I1007 13:45:57.949563  788969 start.go:360] acquireMachinesLock for old-k8s-version-716021: {Name:mk12f5339910ed66eeb98e377e7821a911147d6d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:45:57.949616  788969 start.go:364] duration metric: took 36.118µs to acquireMachinesLock for "old-k8s-version-716021"
	I1007 13:45:57.949635  788969 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:45:57.949648  788969 fix.go:54] fixHost starting: 
	I1007 13:45:57.949948  788969 cli_runner.go:164] Run: docker container inspect old-k8s-version-716021 --format={{.State.Status}}
	I1007 13:45:57.979393  788969 fix.go:112] recreateIfNeeded on old-k8s-version-716021: state=Stopped err=<nil>
	W1007 13:45:57.979427  788969 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:45:57.981762  788969 out.go:177] * Restarting existing docker container for "old-k8s-version-716021" ...
	I1007 13:45:57.983921  788969 cli_runner.go:164] Run: docker start old-k8s-version-716021
	I1007 13:45:58.374372  788969 cli_runner.go:164] Run: docker container inspect old-k8s-version-716021 --format={{.State.Status}}
	I1007 13:45:58.395803  788969 kic.go:430] container "old-k8s-version-716021" state is running.
	I1007 13:45:58.396184  788969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-716021
	I1007 13:45:58.429981  788969 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/config.json ...
	I1007 13:45:58.430206  788969 machine.go:93] provisionDockerMachine start ...
	I1007 13:45:58.430273  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:45:58.459315  788969 main.go:141] libmachine: Using SSH client type: native
	I1007 13:45:58.459577  788969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33799 <nil> <nil>}
	I1007 13:45:58.459586  788969 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:45:58.460372  788969 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47400->127.0.0.1:33799: read: connection reset by peer
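The first dial fails with "connection reset" because sshd inside the just-restarted container is not accepting connections yet; libmachine retries and succeeds about three seconds later. The forwarded port can be checked by hand with the container name and port shown in this log:

$ docker port old-k8s-version-716021 22
# e.g. 0.0.0.0:33799
$ nc -z 127.0.0.1 33799 && echo "ssh port open"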
	I1007 13:46:01.605211  788969 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-716021
	
	I1007 13:46:01.605260  788969 ubuntu.go:169] provisioning hostname "old-k8s-version-716021"
	I1007 13:46:01.605346  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:01.627334  788969 main.go:141] libmachine: Using SSH client type: native
	I1007 13:46:01.627583  788969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33799 <nil> <nil>}
	I1007 13:46:01.627595  788969 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-716021 && echo "old-k8s-version-716021" | sudo tee /etc/hostname
	I1007 13:46:01.787381  788969 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-716021
	
	I1007 13:46:01.787550  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:01.809729  788969 main.go:141] libmachine: Using SSH client type: native
	I1007 13:46:01.809984  788969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33799 <nil> <nil>}
	I1007 13:46:01.810001  788969 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-716021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-716021/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-716021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1007 13:46:01.954248  788969 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:46:01.954279  788969 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18424-574640/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-574640/.minikube}
	I1007 13:46:01.954319  788969 ubuntu.go:177] setting up certificates
	I1007 13:46:01.954331  788969 provision.go:84] configureAuth start
	I1007 13:46:01.954449  788969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-716021
	I1007 13:46:01.976401  788969 provision.go:143] copyHostCerts
	I1007 13:46:01.976488  788969 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-574640/.minikube/ca.pem, removing ...
	I1007 13:46:01.976502  788969 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-574640/.minikube/ca.pem
	I1007 13:46:01.976576  788969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/ca.pem (1082 bytes)
	I1007 13:46:01.976683  788969 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-574640/.minikube/cert.pem, removing ...
	I1007 13:46:01.976695  788969 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-574640/.minikube/cert.pem
	I1007 13:46:01.976725  788969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/cert.pem (1123 bytes)
	I1007 13:46:01.976785  788969 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-574640/.minikube/key.pem, removing ...
	I1007 13:46:01.976793  788969 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-574640/.minikube/key.pem
	I1007 13:46:01.976820  788969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/key.pem (1679 bytes)
	I1007 13:46:01.976872  788969 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-716021 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-716021]
	I1007 13:46:02.520737  788969 provision.go:177] copyRemoteCerts
	I1007 13:46:02.520817  788969 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:46:02.520862  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:02.538709  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:02.647679  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:46:02.676915  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1007 13:46:02.705404  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:46:02.733161  788969 provision.go:87] duration metric: took 778.810007ms to configureAuth
	I1007 13:46:02.733207  788969 ubuntu.go:193] setting minikube options for container-runtime
	I1007 13:46:02.733479  788969 config.go:182] Loaded profile config "old-k8s-version-716021": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1007 13:46:02.733494  788969 machine.go:96] duration metric: took 4.30327257s to provisionDockerMachine
	I1007 13:46:02.733503  788969 start.go:293] postStartSetup for "old-k8s-version-716021" (driver="docker")
	I1007 13:46:02.733520  788969 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:46:02.733589  788969 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:46:02.733639  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:02.755407  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:02.856180  788969 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:46:02.860090  788969 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 13:46:02.860131  788969 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 13:46:02.860145  788969 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 13:46:02.860157  788969 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 13:46:02.860169  788969 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-574640/.minikube/addons for local assets ...
	I1007 13:46:02.860228  788969 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-574640/.minikube/files for local assets ...
	I1007 13:46:02.860307  788969 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/ssl/certs/5801632.pem -> 5801632.pem in /etc/ssl/certs
	I1007 13:46:02.860413  788969 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:46:02.871150  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/ssl/certs/5801632.pem --> /etc/ssl/certs/5801632.pem (1708 bytes)
	I1007 13:46:02.900761  788969 start.go:296] duration metric: took 167.235547ms for postStartSetup
	I1007 13:46:02.900856  788969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:46:02.900914  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:02.921987  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:03.025557  788969 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 13:46:03.031549  788969 fix.go:56] duration metric: took 5.081898539s for fixHost
	I1007 13:46:03.031574  788969 start.go:83] releasing machines lock for "old-k8s-version-716021", held for 5.081949337s
	I1007 13:46:03.031649  788969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-716021
	I1007 13:46:03.082473  788969 ssh_runner.go:195] Run: cat /version.json
	I1007 13:46:03.082530  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:03.083016  788969 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:46:03.083182  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:03.108889  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:03.147931  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:03.217328  788969 ssh_runner.go:195] Run: systemctl --version
	I1007 13:46:03.408287  788969 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:46:03.419563  788969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1007 13:46:03.455300  788969 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1007 13:46:03.455379  788969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:46:03.483648  788969 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
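The find/sed pipeline above touches only the loopback CNI config: it injects a "name" field when one is missing and pins cniVersion to 1.0.0. A sketch for inspecting the patched file on the node, using the /etc/cni/net.d path from the command above:

$ minikube -p old-k8s-version-716021 ssh -- sudo cat /etc/cni/net.d/*loopback.conf*
# expected after the patch: "cniVersion": "1.0.0", "name": "loopback", "type": "loopback"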
	I1007 13:46:03.483672  788969 start.go:495] detecting cgroup driver to use...
	I1007 13:46:03.483706  788969 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 13:46:03.483756  788969 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 13:46:03.546760  788969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 13:46:03.576131  788969 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:46:03.576208  788969 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:46:03.599529  788969 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:46:03.615209  788969 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:46:03.741109  788969 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:46:03.887411  788969 docker.go:233] disabling docker service ...
	I1007 13:46:03.887488  788969 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:46:03.916072  788969 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:46:03.942521  788969 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:46:04.100234  788969 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:46:04.291244  788969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:46:04.316706  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:46:04.346799  788969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1007 13:46:04.368626  788969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 13:46:04.386553  788969 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 13:46:04.386631  788969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 13:46:04.402588  788969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 13:46:04.418026  788969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 13:46:04.433072  788969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 13:46:04.447040  788969 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:46:04.458708  788969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 13:46:04.470956  788969 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:46:04.482207  788969 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:46:04.492928  788969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:46:04.722770  788969 ssh_runner.go:195] Run: sudo systemctl restart containerd
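The sed series rewrites /etc/containerd/config.toml in place: sandbox_image becomes registry.k8s.io/pause:3.2 (the pause image paired with v1.20), SystemdCgroup becomes false to match the cgroupfs driver detected above, the runc runtime is normalized to io.containerd.runc.v2, and conf_dir points at /etc/cni/net.d; the restart makes containerd reload the file. A spot-check of the two values that most often break the kubelet when wrong:

$ minikube -p old-k8s-version-716021 ssh -- grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml
# expected per the edits above:
#   sandbox_image = "registry.k8s.io/pause:3.2"
#   SystemdCgroup = false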
	I1007 13:46:04.954793  788969 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1007 13:46:04.954866  788969 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1007 13:46:04.962580  788969 start.go:563] Will wait 60s for crictl version
	I1007 13:46:04.962644  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:46:04.968084  788969 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:46:05.071375  788969 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1007 13:46:05.071454  788969 ssh_runner.go:195] Run: containerd --version
	I1007 13:46:05.097125  788969 ssh_runner.go:195] Run: containerd --version
	I1007 13:46:05.127117  788969 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1007 13:46:05.130041  788969 cli_runner.go:164] Run: docker network inspect old-k8s-version-716021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:46:05.158932  788969 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1007 13:46:05.163339  788969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:46:05.180459  788969 kubeadm.go:883] updating cluster {Name:old-k8s-version-716021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-716021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:46:05.180598  788969 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1007 13:46:05.180657  788969 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:46:05.227623  788969 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 13:46:05.227651  788969 containerd.go:534] Images already preloaded, skipping extraction
	I1007 13:46:05.227717  788969 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:46:05.280636  788969 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 13:46:05.280657  788969 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:46:05.280665  788969 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1007 13:46:05.280774  788969 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-716021 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-716021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
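The ExecStart line above is the v1.20-era kubelet invocation (flags such as --network-plugin=cni were dropped from later kubelet releases); it is written to the node as the systemd drop-in 10-kubeadm.conf a few lines below. The effective unit can be reviewed with:

$ minikube -p old-k8s-version-716021 ssh -- systemctl cat kubelet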
	I1007 13:46:05.280839  788969 ssh_runner.go:195] Run: sudo crictl info
	I1007 13:46:05.342407  788969 cni.go:84] Creating CNI manager for ""
	I1007 13:46:05.342430  788969 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 13:46:05.342446  788969 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
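As the cni.go lines show, minikube recommends kindnet for the docker driver + containerd runtime combination, and the 10.244.0.0/16 pod CIDR is what gets fed to kubeadm below. The same choice could be made explicit on the command line, assuming minikube's standard --cni flag:

$ minikube start -p old-k8s-version-716021 --cni=kindnet
# alongside the other flags shown at the top of this test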
	I1007 13:46:05.342467  788969 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-716021 NodeName:old-k8s-version-716021 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1007 13:46:05.342596  788969 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-716021"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
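	The YAML block above is the fully rendered kubeadm/kubelet/kube-proxy config that the run ships to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As a rough illustration of how a config of this shape gets produced from the kubeadm options struct (a minimal sketch only; minikube's real template lives in its bootstrapper package and carries many more fields), a Go text/template render looks like:

```go
package main

import (
	"os"
	"text/template"
)

// Params mirrors a few of the kubeadm option fields seen in the log above.
// Both the struct and the template are illustrative, not minikube's own.
type Params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := Params{
		AdvertiseAddress: "192.168.76.2",
		BindPort:         8443,
		NodeName:         "old-k8s-version-716021",
		CRISocket:        "/run/containerd/containerd.sock",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.20.0",
	}
	// Render to stdout; minikube instead streams the bytes to
	// /var/tmp/minikube/kubeadm.yaml.new on the node.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```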
	I1007 13:46:05.342660  788969 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1007 13:46:05.353056  788969 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:46:05.353129  788969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:46:05.363154  788969 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1007 13:46:05.383665  788969 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:46:05.404121  788969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1007 13:46:05.425829  788969 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1007 13:46:05.430038  788969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
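	The bash pipeline above is an idempotent hosts-file update: filter out any stale control-plane.minikube.internal line, append the fresh IP mapping, and copy the result back over /etc/hosts. A native-Go equivalent of the same effect, as a minimal sketch (minikube itself runs the shell version over SSH):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the effect of the shell pipeline above:
// drop any existing line ending in "\thost", then append "ip\thost".
// Illustrative only; needs root to write /etc/hosts.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry; re-added below
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, host)
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.76.2",
		"control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```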
	I1007 13:46:05.442204  788969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:46:05.556549  788969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:46:05.573975  788969 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021 for IP: 192.168.76.2
	I1007 13:46:05.574036  788969 certs.go:194] generating shared ca certs ...
	I1007 13:46:05.574076  788969 certs.go:226] acquiring lock for ca certs: {Name:mkb94cd23ae3efb673f2949842bd2c98014816e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:46:05.574263  788969 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-574640/.minikube/ca.key
	I1007 13:46:05.574345  788969 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.key
	I1007 13:46:05.574379  788969 certs.go:256] generating profile certs ...
	I1007 13:46:05.574510  788969 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.key
	I1007 13:46:05.574629  788969 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/apiserver.key.6852896d
	I1007 13:46:05.574712  788969 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/proxy-client.key
	I1007 13:46:05.574957  788969 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/580163.pem (1338 bytes)
	W1007 13:46:05.575035  788969 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-574640/.minikube/certs/580163_empty.pem, impossibly tiny 0 bytes
	I1007 13:46:05.575060  788969 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 13:46:05.575132  788969 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:46:05.575191  788969 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:46:05.575255  788969 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem (1679 bytes)
	I1007 13:46:05.575339  788969 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/ssl/certs/5801632.pem (1708 bytes)
	I1007 13:46:05.576219  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:46:05.663591  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:46:05.701394  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:46:05.763635  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:46:05.795049  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1007 13:46:05.824194  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:46:05.851887  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:46:05.920055  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1007 13:46:05.944873  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/ssl/certs/5801632.pem --> /usr/share/ca-certificates/5801632.pem (1708 bytes)
	I1007 13:46:05.969261  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:46:05.993900  788969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/certs/580163.pem --> /usr/share/ca-certificates/580163.pem (1338 bytes)
	I1007 13:46:06.022131  788969 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
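	The repeated `scp memory --> ... (N bytes)` lines above record in-memory assets being streamed to the node over the SSH tunnel rather than copied from a local file. A rough sketch of such a transfer using golang.org/x/crypto/ssh with `sudo tee` as the remote sink (assumptions: the 127.0.0.1:33799 tunnel and docker user from the sshutil lines later in this log; the key path is a placeholder; minikube's ssh_runner uses its own transfer logic):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// scpMemory streams an in-memory payload to a remote path via `sudo tee`,
// approximating the `ssh_runner.go:362] scp memory --> ...` steps above.
func scpMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	stdin, err := sess.StdinPipe()
	if err != nil {
		return err
	}
	if err := sess.Start(fmt.Sprintf("sudo tee %s > /dev/null", dst)); err != nil {
		return err
	}
	if _, err := stdin.Write(data); err != nil {
		return err
	}
	stdin.Close() // EOF lets tee finish
	return sess.Wait()
}

func main() {
	// Placeholder key path; adjust to your profile's machine directory.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-716021/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33799", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local tunnel
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	payload := []byte("example file contents\n")
	if err := scpMemory(client, payload, "/var/tmp/minikube/example.txt"); err != nil {
		panic(err)
	}
}
```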
	I1007 13:46:06.044232  788969 ssh_runner.go:195] Run: openssl version
	I1007 13:46:06.051104  788969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5801632.pem && ln -fs /usr/share/ca-certificates/5801632.pem /etc/ssl/certs/5801632.pem"
	I1007 13:46:06.061762  788969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5801632.pem
	I1007 13:46:06.066240  788969 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 13:06 /usr/share/ca-certificates/5801632.pem
	I1007 13:46:06.066333  788969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5801632.pem
	I1007 13:46:06.074548  788969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5801632.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:46:06.090960  788969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:46:06.104109  788969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:46:06.108723  788969 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:46:06.108826  788969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:46:06.117522  788969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1007 13:46:06.128201  788969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/580163.pem && ln -fs /usr/share/ca-certificates/580163.pem /etc/ssl/certs/580163.pem"
	I1007 13:46:06.141805  788969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/580163.pem
	I1007 13:46:06.146629  788969 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 13:06 /usr/share/ca-certificates/580163.pem
	I1007 13:46:06.146733  788969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/580163.pem
	I1007 13:46:06.154918  788969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/580163.pem /etc/ssl/certs/51391683.0"
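	The three `openssl x509 -hash` / `ln -fs` pairs above install each cert into OpenSSL's lookup scheme: `-hash -noout` prints the 8-hex-digit subject hash, and the symlink `/etc/ssl/certs/<hash>.0` (e.g. 5801632.pem -> 3ec20f2e.0) is what OpenSSL's CA directory lookup expects. A minimal sketch of that hash-and-link step, shelling out to openssl the same way:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink computes the OpenSSL subject hash for certPath and creates the
// <certsDir>/<hash>.0 symlink -- a sketch of the ssh_runner commands above.
func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs (force-replace)
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/5801632.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```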
	I1007 13:46:06.164445  788969 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:46:06.168827  788969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:46:06.176731  788969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:46:06.184583  788969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:46:06.192151  788969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:46:06.199680  788969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:46:06.207156  788969 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
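	Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would mark the cert for regeneration. The same check expressed with Go's standard crypto/x509, as a sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>` (sketch, not minikube's code).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	}
}
```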
	I1007 13:46:06.214541  788969 kubeadm.go:392] StartCluster: {Name:old-k8s-version-716021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-716021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:46:06.214683  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1007 13:46:06.214771  788969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:46:06.269220  788969 cri.go:89] found id: "fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed"
	I1007 13:46:06.269251  788969 cri.go:89] found id: "54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f"
	I1007 13:46:06.269256  788969 cri.go:89] found id: "1ee06c3a9952c8c73c2b2a6c861f6a265db79248f4d1cef2824ca7604148d547"
	I1007 13:46:06.269260  788969 cri.go:89] found id: "90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d"
	I1007 13:46:06.269294  788969 cri.go:89] found id: "3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f"
	I1007 13:46:06.269306  788969 cri.go:89] found id: "6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771"
	I1007 13:46:06.269311  788969 cri.go:89] found id: "dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960"
	I1007 13:46:06.269314  788969 cri.go:89] found id: "3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c"
	I1007 13:46:06.269318  788969 cri.go:89] found id: ""
	I1007 13:46:06.269392  788969 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1007 13:46:06.283562  788969 cri.go:116] JSON = null
	W1007 13:46:06.283645  788969 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
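	The warning above comes from cross-checking two views of the runtime: `crictl ps -a` found 8 kube-system container IDs, while `runc list -f json` printed the literal `null` (runc serialises an empty container list that way), which unmarshals to a nil slice. A sketch of that comparison:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// runc's `list -f json` prints a JSON array of container states, or the
// literal `null` when the root has none. Unmarshalling `null` into a slice
// leaves it nil -- exactly the "JSON = null" line in the log above.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	raw := []byte("null") // what `runc list -f json` returned above

	var paused []runcContainer
	if err := json.Unmarshal(raw, &paused); err != nil {
		panic(err)
	}

	crictlCount := 8 // IDs returned by `crictl ps -a --quiet ...` above
	if len(paused) != crictlCount {
		// matches: "list returned 0 containers, but ps returned 8"
		fmt.Printf("unpause mismatch: list returned %d containers, but ps returned %d\n",
			len(paused), crictlCount)
	}
}
```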
	I1007 13:46:06.283740  788969 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:46:06.293219  788969 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:46:06.293237  788969 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:46:06.293288  788969 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:46:06.301828  788969 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:46:06.302315  788969 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-716021" does not appear in /home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:46:06.302490  788969 kubeconfig.go:62] /home/jenkins/minikube-integration/18424-574640/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-716021" cluster setting kubeconfig missing "old-k8s-version-716021" context setting]
	I1007 13:46:06.302958  788969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/kubeconfig: {Name:mk8cb646df388630470eb87db824f7b511497a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
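	Here the shared kubeconfig lacked both a cluster and a context entry for the profile, so it is rewritten under a file lock. A hedged sketch of such a repair using client-go's clientcmd package (an assumed dependency and a simplification, not minikube's exact code path):

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds cluster and context entries for a profile if they
// are missing, then writes the file back -- a sketch of the "needs updating
// (will repair)" step above. Real minikube also takes a write lock first.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := clientcmdapi.NewCluster()
		c.Server = server
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := repairKubeconfig(os.Getenv("KUBECONFIG"),
		"old-k8s-version-716021", "https://192.168.76.2:8443"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```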
	I1007 13:46:06.304288  788969 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:46:06.313992  788969 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1007 13:46:06.314024  788969 kubeadm.go:597] duration metric: took 20.780537ms to restartPrimaryControlPlane
	I1007 13:46:06.314034  788969 kubeadm.go:394] duration metric: took 99.504516ms to StartCluster
	I1007 13:46:06.314054  788969 settings.go:142] acquiring lock: {Name:mk8a7c208419d2453ea37ed5e7d0421609f0d046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:46:06.314113  788969 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:46:06.314728  788969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/kubeconfig: {Name:mk8cb646df388630470eb87db824f7b511497a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:46:06.314927  788969 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1007 13:46:06.315205  788969 config.go:182] Loaded profile config "old-k8s-version-716021": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1007 13:46:06.315254  788969 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:46:06.315318  788969 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-716021"
	I1007 13:46:06.315333  788969 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-716021"
	W1007 13:46:06.315340  788969 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:46:06.315361  788969 host.go:66] Checking if "old-k8s-version-716021" exists ...
	I1007 13:46:06.315963  788969 cli_runner.go:164] Run: docker container inspect old-k8s-version-716021 --format={{.State.Status}}
	I1007 13:46:06.317907  788969 addons.go:69] Setting dashboard=true in profile "old-k8s-version-716021"
	I1007 13:46:06.317929  788969 addons.go:234] Setting addon dashboard=true in "old-k8s-version-716021"
	W1007 13:46:06.317936  788969 addons.go:243] addon dashboard should already be in state true
	I1007 13:46:06.317966  788969 host.go:66] Checking if "old-k8s-version-716021" exists ...
	I1007 13:46:06.318434  788969 cli_runner.go:164] Run: docker container inspect old-k8s-version-716021 --format={{.State.Status}}
	I1007 13:46:06.318674  788969 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-716021"
	I1007 13:46:06.318694  788969 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-716021"
	W1007 13:46:06.318701  788969 addons.go:243] addon metrics-server should already be in state true
	I1007 13:46:06.318730  788969 host.go:66] Checking if "old-k8s-version-716021" exists ...
	I1007 13:46:06.319118  788969 cli_runner.go:164] Run: docker container inspect old-k8s-version-716021 --format={{.State.Status}}
	I1007 13:46:06.320627  788969 out.go:177] * Verifying Kubernetes components...
	I1007 13:46:06.323240  788969 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-716021"
	I1007 13:46:06.323272  788969 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-716021"
	I1007 13:46:06.323565  788969 cli_runner.go:164] Run: docker container inspect old-k8s-version-716021 --format={{.State.Status}}
	I1007 13:46:06.325012  788969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:46:06.361615  788969 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:46:06.365645  788969 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:46:06.365734  788969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:46:06.365819  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:06.376439  788969 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1007 13:46:06.381815  788969 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1007 13:46:06.384239  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1007 13:46:06.384259  788969 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1007 13:46:06.384327  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:06.395373  788969 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-716021"
	W1007 13:46:06.395404  788969 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:46:06.395429  788969 host.go:66] Checking if "old-k8s-version-716021" exists ...
	I1007 13:46:06.395831  788969 cli_runner.go:164] Run: docker container inspect old-k8s-version-716021 --format={{.State.Status}}
	I1007 13:46:06.445221  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:06.446328  788969 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:46:06.448682  788969 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:46:06.448705  788969 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:46:06.448787  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:06.456194  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:06.472945  788969 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:46:06.472985  788969 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:46:06.473088  788969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-716021
	I1007 13:46:06.473411  788969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:46:06.524484  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:06.529183  788969 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-716021" to be "Ready" ...
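	From this point node_ready.go polls the node object for up to 6m, treating errors such as the connection-refused ones further down as transient rather than fatal. A sketch of that wait loop with client-go (assumed dependency; simplified relative to minikube's own helper):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls until the named node reports Ready=True, swallowing
// transient errors (e.g. "connection refused" while the apiserver restarts),
// as the node_ready.go lines in this log do. Sketch only.
func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			fmt.Printf("error getting node %q: %v\n", name, err)
			return false, nil // retry; do not abort on transient errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(cs, "old-k8s-version-716021", 6*time.Minute); err != nil {
		panic(err)
	}
}
```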
	I1007 13:46:06.552555  788969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33799 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/old-k8s-version-716021/id_rsa Username:docker}
	I1007 13:46:06.600204  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:46:06.685585  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1007 13:46:06.685608  788969 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1007 13:46:06.728003  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1007 13:46:06.728078  788969 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1007 13:46:06.735239  788969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:46:06.735315  788969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:46:06.798291  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1007 13:46:06.798316  788969 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1007 13:46:06.798596  788969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:46:06.798727  788969 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:46:06.819661  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:46:06.891016  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1007 13:46:06.891036  788969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1007 13:46:06.904443  788969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:46:06.904528  788969 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W1007 13:46:06.935870  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:06.935911  788969 retry.go:31] will retry after 261.633317ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
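	From here the log settles into a pattern: the apiserver is not yet listening on 8443, so every `kubectl apply` fails with "connection refused" and retry.go re-queues it with a randomized, growing delay ("will retry after 261.633317ms", then progressively longer waits). A minimal sketch of that backoff-with-jitter loop (illustrative; not minikube's actual retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithJitter re-runs fn until it succeeds or maxTime elapses, sleeping
// a randomized, growing interval between attempts -- the pattern behind the
// "will retry after ..." lines above. Sketch, not minikube's retry.go.
func retryWithJitter(fn func() error, maxTime time.Duration) error {
	deadline := time.Now().Add(maxTime)
	base := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		// grow the base delay and add up to 50% random jitter
		sleep := base + time.Duration(rand.Int63n(int64(base/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		base = base * 3 / 2
	}
}

func main() {
	calls := 0
	err := retryWithJitter(func() error {
		calls++
		if calls < 4 { // simulate the apiserver coming up on the 4th try
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}
```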
	I1007 13:46:06.959201  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1007 13:46:06.959228  788969 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1007 13:46:07.023335  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 13:46:07.037048  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.037134  788969 retry.go:31] will retry after 133.972785ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.050226  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1007 13:46:07.050307  788969 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1007 13:46:07.108363  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1007 13:46:07.108444  788969 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1007 13:46:07.148807  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1007 13:46:07.148891  788969 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1007 13:46:07.171856  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1007 13:46:07.189820  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.189930  788969 retry.go:31] will retry after 354.295601ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.198204  788969 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 13:46:07.198286  788969 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1007 13:46:07.198320  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:46:07.294700  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 13:46:07.446215  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.446247  788969 retry.go:31] will retry after 551.906217ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 13:46:07.446296  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.446302  788969 retry.go:31] will retry after 532.642273ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 13:46:07.504935  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.504967  788969 retry.go:31] will retry after 371.680389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.544830  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 13:46:07.650246  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.650284  788969 retry.go:31] will retry after 467.621154ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.876884  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 13:46:07.979962  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1007 13:46:07.980957  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.980993  788969 retry.go:31] will retry after 257.463197ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:07.998500  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:46:08.118493  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 13:46:08.141082  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:08.141118  788969 retry.go:31] will retry after 767.911389ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:08.239279  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 13:46:08.288112  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:08.288154  788969 retry.go:31] will retry after 636.104053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 13:46:08.438247  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:08.438287  788969 retry.go:31] will retry after 570.751236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 13:46:08.468973  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:08.469004  788969 retry.go:31] will retry after 353.559317ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:08.530876  788969 node_ready.go:53] error getting node "old-k8s-version-716021": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-716021": dial tcp 192.168.76.2:8443: connect: connection refused
	I1007 13:46:08.823430  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 13:46:08.909786  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1007 13:46:08.923101  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:08.923133  788969 retry.go:31] will retry after 1.018024514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:08.925376  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:46:09.009669  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 13:46:09.163255  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:09.163286  788969 retry.go:31] will retry after 1.245239285s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 13:46:09.163330  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:09.163338  788969 retry.go:31] will retry after 955.898971ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 13:46:09.233102  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:09.233135  788969 retry.go:31] will retry after 1.263712163s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:09.941433  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 13:46:10.065696  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:10.065731  788969 retry.go:31] will retry after 1.32022848s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:10.120074  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1007 13:46:10.227377  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:10.227417  788969 retry.go:31] will retry after 731.297685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:10.409641  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:46:10.497166  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 13:46:10.503604  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:10.503642  788969 retry.go:31] will retry after 1.426551085s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 13:46:10.607567  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:10.607605  788969 retry.go:31] will retry after 892.983574ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:10.958912  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:46:11.030676  788969 node_ready.go:53] error getting node "old-k8s-version-716021": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-716021": dial tcp 192.168.76.2:8443: connect: connection refused
	W1007 13:46:11.058236  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:11.058266  788969 retry.go:31] will retry after 2.472362831s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:11.386901  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 13:46:11.482369  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:11.482414  788969 retry.go:31] will retry after 2.391254743s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:11.501681  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 13:46:11.610753  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:11.610790  788969 retry.go:31] will retry after 2.662015924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:11.931359  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1007 13:46:12.038997  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:12.039046  788969 retry.go:31] will retry after 1.724850441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:13.529756  788969 node_ready.go:53] error getting node "old-k8s-version-716021": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-716021": dial tcp 192.168.76.2:8443: connect: connection refused
	I1007 13:46:13.530856  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1007 13:46:13.639787  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:13.639822  788969 retry.go:31] will retry after 3.963151321s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:13.764148  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:46:13.874519  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1007 13:46:13.889095  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:13.889131  788969 retry.go:31] will retry after 1.744963465s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1007 13:46:13.999476  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:13.999522  788969 retry.go:31] will retry after 2.259482521s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:14.273512  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1007 13:46:14.385503  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:14.385537  788969 retry.go:31] will retry after 2.257107138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:15.530151  788969 node_ready.go:53] error getting node "old-k8s-version-716021": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-716021": dial tcp 192.168.76.2:8443: connect: connection refused
	I1007 13:46:15.634256  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1007 13:46:15.997416  788969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:15.997446  788969 retry.go:31] will retry after 2.819310108s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1007 13:46:16.259794  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 13:46:16.642845  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:46:17.604138  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:46:18.817334  788969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:46:26.030516  788969 node_ready.go:53] error getting node "old-k8s-version-716021": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-716021": net/http: TLS handshake timeout
	I1007 13:46:26.580155  788969 node_ready.go:49] node "old-k8s-version-716021" has status "Ready":"True"
	I1007 13:46:26.580227  788969 node_ready.go:38] duration metric: took 20.051002015s for node "old-k8s-version-716021" to be "Ready" ...
	I1007 13:46:26.580253  788969 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:46:27.092979  788969 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-fz8dj" in "kube-system" namespace to be "Ready" ...
	I1007 13:46:27.249519  788969 pod_ready.go:93] pod "coredns-74ff55c5b-fz8dj" in "kube-system" namespace has status "Ready":"True"
	I1007 13:46:27.249590  788969 pod_ready.go:82] duration metric: took 156.515606ms for pod "coredns-74ff55c5b-fz8dj" in "kube-system" namespace to be "Ready" ...
	I1007 13:46:27.249619  788969 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:46:27.296638  788969 pod_ready.go:93] pod "etcd-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"True"
	I1007 13:46:27.296720  788969 pod_ready.go:82] duration metric: took 47.080454ms for pod "etcd-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:46:27.296752  788969 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:46:29.312749  788969 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:30.571373  788969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (14.311461894s)
	I1007 13:46:30.571656  788969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (13.928778686s)
	I1007 13:46:30.571727  788969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (12.967565768s)
	I1007 13:46:30.571821  788969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.754408481s)
	I1007 13:46:30.571841  788969 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-716021"
	I1007 13:46:30.573460  788969 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-716021 addons enable metrics-server
	
	I1007 13:46:30.598552  788969 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1007 13:46:30.600980  788969 addons.go:510] duration metric: took 24.285717237s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
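
The retry.go lines above illustrate the pattern driving this whole phase: every `kubectl apply` that fails with "connection refused" is re-run after an irregular, growing delay until the restarted apiserver accepts connections again. The sketch below reproduces that retry-with-jittered-backoff shape; applyManifests, retryApply, and the starting delay are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"errors"
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// applyManifests shells out once; any non-zero exit is treated as
// retryable, mirroring the "apply failed, will retry" lines above.
func applyManifests(ctx context.Context, files ...string) error {
	args := []string{"apply", "--force"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	return exec.CommandContext(ctx, "kubectl", args...).Run()
}

// retryApply doubles the delay after each failure and adds jitter,
// which is why the logged delays (892ms, 2.47s, 2.39s, ...) are
// irregular rather than a clean geometric series.
func retryApply(ctx context.Context, files ...string) error {
	delay := 500 * time.Millisecond // assumed starting point
	for attempt := 1; ; attempt++ {
		err := applyManifests(ctx, files...)
		if err == nil {
			return nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed (%v), will retry after %s\n", attempt, err, wait)
		select {
		case <-time.After(wait):
			delay *= 2
		case <-ctx.Done():
			return errors.Join(err, ctx.Err())
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := retryApply(ctx, "/etc/kubernetes/addons/storageclass.yaml"); err != nil {
		fmt.Println("giving up:", err)
	}
}
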
	I1007 13:46:31.806981  788969 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:33.810705  788969 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:36.304379  788969 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"True"
	I1007 13:46:36.304451  788969 pod_ready.go:82] duration metric: took 9.007665991s for pod "kube-apiserver-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:46:36.304479  788969 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:46:38.313068  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:40.810765  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:43.311100  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:45.327161  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:47.812665  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:50.312343  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:52.813812  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:55.312032  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:57.810669  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:46:59.810779  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:01.817163  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:04.312106  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:06.314767  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:08.811500  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:10.811668  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:12.815043  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:14.815102  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:17.310444  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:19.310986  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:21.311478  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:23.313778  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:25.811416  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:27.820147  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:29.822185  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:32.312523  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:34.314786  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:36.824207  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:39.312769  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:41.811539  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:43.819839  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:46.311911  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:48.312191  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:50.811830  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:53.310894  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:55.311229  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:56.311516  788969 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:56.311544  788969 pod_ready.go:82] duration metric: took 1m20.007043994s for pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.311557  788969 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hdch9" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.317840  788969 pod_ready.go:93] pod "kube-proxy-hdch9" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:56.317869  788969 pod_ready.go:82] duration metric: took 6.304821ms for pod "kube-proxy-hdch9" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.317882  788969 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.323346  788969 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:56.323374  788969 pod_ready.go:82] duration metric: took 5.462529ms for pod "kube-scheduler-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.323387  788969 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:58.330847  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:00.496638  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:02.830995  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:05.330608  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:07.378265  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:09.829529  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:11.830063  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:13.830398  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:16.330624  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:18.829935  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:20.834962  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:23.329605  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:25.330833  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:27.831879  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:30.331107  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:32.829901  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:35.330367  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:37.330969  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:39.829226  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:41.829841  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:44.329644  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:46.330697  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:48.330894  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:50.829621  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:52.830554  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:55.329892  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:57.832139  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:00.381288  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:02.829594  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:04.831500  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:07.329846  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:09.329971  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:11.330024  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:13.830806  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:15.831685  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:17.853965  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:20.330145  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:22.330226  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:24.829579  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:26.829872  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:28.830038  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:30.832511  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:33.330702  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:35.331136  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:37.834831  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:40.330871  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:42.334259  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:44.830040  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:46.831538  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:49.329922  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:51.330390  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:53.829974  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:56.330400  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:58.829116  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:00.830835  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:03.329964  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:05.330720  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:07.833777  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:10.331002  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:12.830194  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:14.831603  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:17.330076  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:19.330317  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:21.330539  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:23.829440  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:25.832290  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:28.330077  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:30.330410  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:32.830347  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:35.329841  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:37.330605  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:39.333940  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:41.830665  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:44.329814  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:46.330801  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:48.835715  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:51.330141  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:53.330922  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:55.830596  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:57.833565  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:00.410864  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:02.830510  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:05.330789  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:07.831297  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:10.330220  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:12.330573  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:14.830258  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:17.330295  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:19.330834  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:21.829496  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:23.829855  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:25.830331  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:27.839887  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:30.335084  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:32.829640  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:34.830683  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:37.329315  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:39.330613  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:41.829295  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:43.833110  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:46.329539  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:48.335702  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:50.829593  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:52.831395  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:55.330997  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:56.334279  788969 pod_ready.go:82] duration metric: took 4m0.010877247s for pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace to be "Ready" ...
	E1007 13:51:56.334361  788969 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1007 13:51:56.334374  788969 pod_ready.go:39] duration metric: took 5m29.754093121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:51:56.334389  788969 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:51:56.334457  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:51:56.334579  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:51:56.412777  788969 cri.go:89] found id: "6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7"
	I1007 13:51:56.412797  788969 cri.go:89] found id: "dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960"
	I1007 13:51:56.412802  788969 cri.go:89] found id: ""
	I1007 13:51:56.412810  788969 logs.go:282] 2 containers: [6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7 dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960]
	I1007 13:51:56.412866  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.422463  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.429934  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 13:51:56.430014  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:51:56.500290  788969 cri.go:89] found id: "538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1"
	I1007 13:51:56.500315  788969 cri.go:89] found id: "3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c"
	I1007 13:51:56.500320  788969 cri.go:89] found id: ""
	I1007 13:51:56.500327  788969 logs.go:282] 2 containers: [538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1 3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c]
	I1007 13:51:56.500384  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.505493  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.511697  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 13:51:56.511775  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:51:56.568360  788969 cri.go:89] found id: "b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b"
	I1007 13:51:56.568387  788969 cri.go:89] found id: "fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed"
	I1007 13:51:56.568392  788969 cri.go:89] found id: ""
	I1007 13:51:56.568399  788969 logs.go:282] 2 containers: [b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed]
	I1007 13:51:56.568468  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.572933  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.577480  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:51:56.577558  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:51:56.637806  788969 cri.go:89] found id: "1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297"
	I1007 13:51:56.637834  788969 cri.go:89] found id: "3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f"
	I1007 13:51:56.637838  788969 cri.go:89] found id: ""
	I1007 13:51:56.637846  788969 logs.go:282] 2 containers: [1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297 3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f]
	I1007 13:51:56.637918  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.643411  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.647884  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:51:56.647957  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:51:56.750655  788969 cri.go:89] found id: "b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba"
	I1007 13:51:56.750682  788969 cri.go:89] found id: "90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d"
	I1007 13:51:56.750687  788969 cri.go:89] found id: ""
	I1007 13:51:56.750694  788969 logs.go:282] 2 containers: [b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba 90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d]
	I1007 13:51:56.750755  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.761341  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.767207  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:51:56.767287  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:51:56.854493  788969 cri.go:89] found id: "a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06"
	I1007 13:51:56.854525  788969 cri.go:89] found id: "6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771"
	I1007 13:51:56.854531  788969 cri.go:89] found id: ""
	I1007 13:51:56.854538  788969 logs.go:282] 2 containers: [a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06 6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771]
	I1007 13:51:56.854606  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.859331  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.863262  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 13:51:56.863350  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:51:56.940414  788969 cri.go:89] found id: "c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3"
	I1007 13:51:56.940437  788969 cri.go:89] found id: "54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f"
	I1007 13:51:56.940442  788969 cri.go:89] found id: ""
	I1007 13:51:56.940450  788969 logs.go:282] 2 containers: [c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3 54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f]
	I1007 13:51:56.940513  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.946919  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.951803  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:51:56.951885  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:51:57.012766  788969 cri.go:89] found id: "02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608"
	I1007 13:51:57.012792  788969 cri.go:89] found id: ""
	I1007 13:51:57.012800  788969 logs.go:282] 1 containers: [02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608]
	I1007 13:51:57.012863  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:57.017491  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 13:51:57.017580  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 13:51:57.075862  788969 cri.go:89] found id: "94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c"
	I1007 13:51:57.075894  788969 cri.go:89] found id: "3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656"
	I1007 13:51:57.075899  788969 cri.go:89] found id: ""
	I1007 13:51:57.075906  788969 logs.go:282] 2 containers: [94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c 3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656]
	I1007 13:51:57.075967  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:57.081400  788969 ssh_runner.go:195] Run: which crictl
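
Before gathering logs, the cri.go lines above resolve each component to its container IDs by shelling out to "sudo crictl ps -a --quiet --name=<component>" and keeping each non-empty output line (the trailing `found id: ""` entries are the empty terminator). A small stand-alone sketch of that lookup; the command and flags are the ones visible in the log, while the function name and error handling are illustrative:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// listContainerIDs mirrors the cri.go calls above: `crictl ps -a --quiet
// --name=<name>` prints one container ID per line, possibly none.
func listContainerIDs(ctx context.Context, name string) ([]string, error) {
	out, err := exec.CommandContext(ctx, "sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
	}
	return strings.Fields(string(out)), nil // drops blank trailing lines
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainerIDs(ctx, component)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", component, len(ids), ids)
	}
}

Each discovered ID is then fed to "crictl logs --tail 400 <id>" in the logs.go lines that follow.
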
	I1007 13:51:57.085944  788969 logs.go:123] Gathering logs for containerd ...
	I1007 13:51:57.085968  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 13:51:57.162686  788969 logs.go:123] Gathering logs for dmesg ...
	I1007 13:51:57.162733  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:51:57.179955  788969 logs.go:123] Gathering logs for kube-controller-manager [6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771] ...
	I1007 13:51:57.179991  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771"
	I1007 13:51:57.306063  788969 logs.go:123] Gathering logs for etcd [3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c] ...
	I1007 13:51:57.306099  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c"
	I1007 13:51:57.357347  788969 logs.go:123] Gathering logs for coredns [fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed] ...
	I1007 13:51:57.357378  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed"
	I1007 13:51:57.419970  788969 logs.go:123] Gathering logs for kube-controller-manager [a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06] ...
	I1007 13:51:57.420003  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06"
	I1007 13:51:57.497427  788969 logs.go:123] Gathering logs for kubelet ...
	I1007 13:51:57.497465  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 13:51:57.558680  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:26 old-k8s-version-716021 kubelet[666]: E1007 13:46:26.808783     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-716021" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-716021' and this object
	W1007 13:51:57.559023  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:26 old-k8s-version-716021 kubelet[666]: E1007 13:46:26.854809     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-85z2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85z2f" is forbidden: User "system:node:old-k8s-version-716021" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-716021' and this object
	W1007 13:51:57.562387  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:26 old-k8s-version-716021 kubelet[666]: E1007 13:46:26.856421     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-716021" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-716021' and this object
	W1007 13:51:57.562607  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:26 old-k8s-version-716021 kubelet[666]: E1007 13:46:26.856819     666 reflector.go:138] object-"kube-system"/"coredns-token-kp44b": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-kp44b" is forbidden: User "system:node:old-k8s-version-716021" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-716021' and this object
	W1007 13:51:57.567093  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:28 old-k8s-version-716021 kubelet[666]: E1007 13:46:28.915071     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.567283  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:29 old-k8s-version-716021 kubelet[666]: E1007 13:46:29.091904     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.570040  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:43 old-k8s-version-716021 kubelet[666]: E1007 13:46:43.213529     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.571971  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:51 old-k8s-version-716021 kubelet[666]: E1007 13:46:51.378418     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.572429  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:52 old-k8s-version-716021 kubelet[666]: E1007 13:46:52.384192     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.572755  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:53 old-k8s-version-716021 kubelet[666]: E1007 13:46:53.386690     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.573266  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:57 old-k8s-version-716021 kubelet[666]: E1007 13:46:57.207180     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.573720  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:01 old-k8s-version-716021 kubelet[666]: E1007 13:47:01.413751     666 pod_workers.go:191] Error syncing pod 4cfcc06f-69c5-42d6-bb20-67a3d942cfb0 ("storage-provisioner_kube-system(4cfcc06f-69c5-42d6-bb20-67a3d942cfb0)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4cfcc06f-69c5-42d6-bb20-67a3d942cfb0)"
	W1007 13:51:57.574645  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:05 old-k8s-version-716021 kubelet[666]: E1007 13:47:05.430539     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.577065  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:09 old-k8s-version-716021 kubelet[666]: E1007 13:47:09.218614     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.577392  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:12 old-k8s-version-716021 kubelet[666]: E1007 13:47:12.186154     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.577711  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:23 old-k8s-version-716021 kubelet[666]: E1007 13:47:23.206886     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.578040  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:24 old-k8s-version-716021 kubelet[666]: E1007 13:47:24.209098     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.578224  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:34 old-k8s-version-716021 kubelet[666]: E1007 13:47:34.211370     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.578812  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:40 old-k8s-version-716021 kubelet[666]: E1007 13:47:40.564455     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.579137  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:42 old-k8s-version-716021 kubelet[666]: E1007 13:47:42.186663     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.579319  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:47 old-k8s-version-716021 kubelet[666]: E1007 13:47:47.209688     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.579644  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:54 old-k8s-version-716021 kubelet[666]: E1007 13:47:54.207353     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.582062  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:59 old-k8s-version-716021 kubelet[666]: E1007 13:47:59.222600     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.582424  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:07 old-k8s-version-716021 kubelet[666]: E1007 13:48:07.206479     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.582648  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:12 old-k8s-version-716021 kubelet[666]: E1007 13:48:12.222632     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.583311  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:20 old-k8s-version-716021 kubelet[666]: E1007 13:48:20.682098     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.583698  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:22 old-k8s-version-716021 kubelet[666]: E1007 13:48:22.186594     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.583912  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:23 old-k8s-version-716021 kubelet[666]: E1007 13:48:23.206902     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.584114  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:35 old-k8s-version-716021 kubelet[666]: E1007 13:48:35.206832     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.584442  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:36 old-k8s-version-716021 kubelet[666]: E1007 13:48:36.206609     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.584626  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:46 old-k8s-version-716021 kubelet[666]: E1007 13:48:46.206819     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.584987  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:49 old-k8s-version-716021 kubelet[666]: E1007 13:48:49.206451     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.585172  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:01 old-k8s-version-716021 kubelet[666]: E1007 13:49:01.206862     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.585505  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:04 old-k8s-version-716021 kubelet[666]: E1007 13:49:04.210397     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.585696  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:16 old-k8s-version-716021 kubelet[666]: E1007 13:49:16.206769     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.586043  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:19 old-k8s-version-716021 kubelet[666]: E1007 13:49:19.206428     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.588462  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:30 old-k8s-version-716021 kubelet[666]: E1007 13:49:30.231019     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.588788  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:31 old-k8s-version-716021 kubelet[666]: E1007 13:49:31.206480     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.588970  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:45 old-k8s-version-716021 kubelet[666]: E1007 13:49:45.207213     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.589558  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:46 old-k8s-version-716021 kubelet[666]: E1007 13:49:46.907884     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.589896  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:52 old-k8s-version-716021 kubelet[666]: E1007 13:49:52.187043     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.590078  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:57 old-k8s-version-716021 kubelet[666]: E1007 13:49:57.214713     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.590404  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:05 old-k8s-version-716021 kubelet[666]: E1007 13:50:05.206555     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.590593  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:08 old-k8s-version-716021 kubelet[666]: E1007 13:50:08.211199     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.590918  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:17 old-k8s-version-716021 kubelet[666]: E1007 13:50:17.206916     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.591102  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:19 old-k8s-version-716021 kubelet[666]: E1007 13:50:19.206748     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.591430  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:28 old-k8s-version-716021 kubelet[666]: E1007 13:50:28.207831     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.591612  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:30 old-k8s-version-716021 kubelet[666]: E1007 13:50:30.211659     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.591939  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:40 old-k8s-version-716021 kubelet[666]: E1007 13:50:40.210379     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.592120  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:45 old-k8s-version-716021 kubelet[666]: E1007 13:50:45.207104     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.592445  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:53 old-k8s-version-716021 kubelet[666]: E1007 13:50:53.206980     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.592626  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:57 old-k8s-version-716021 kubelet[666]: E1007 13:50:57.206804     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.592951  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:08 old-k8s-version-716021 kubelet[666]: E1007 13:51:08.206532     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.593132  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:08 old-k8s-version-716021 kubelet[666]: E1007 13:51:08.207812     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.593313  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: E1007 13:51:22.206879     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.593637  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: E1007 13:51:22.208036     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.593845  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:33 old-k8s-version-716021 kubelet[666]: E1007 13:51:33.206925     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.594174  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:34 old-k8s-version-716021 kubelet[666]: E1007 13:51:34.206498     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.594357  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:47 old-k8s-version-716021 kubelet[666]: E1007 13:51:47.231399     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.594690  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:48 old-k8s-version-716021 kubelet[666]: E1007 13:51:48.206953     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	I1007 13:51:57.594701  788969 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:51:57.594714  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:51:57.744792  788969 logs.go:123] Gathering logs for etcd [538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1] ...
	I1007 13:51:57.744823  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1"
	I1007 13:51:57.789489  788969 logs.go:123] Gathering logs for coredns [b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b] ...
	I1007 13:51:57.789522  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b"
	I1007 13:51:57.841342  788969 logs.go:123] Gathering logs for kube-scheduler [3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f] ...
	I1007 13:51:57.841376  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f"
	I1007 13:51:57.885042  788969 logs.go:123] Gathering logs for kube-proxy [b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba] ...
	I1007 13:51:57.885079  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba"
	I1007 13:51:57.928690  788969 logs.go:123] Gathering logs for kube-proxy [90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d] ...
	I1007 13:51:57.928720  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d"
	I1007 13:51:57.966663  788969 logs.go:123] Gathering logs for kindnet [c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3] ...
	I1007 13:51:57.966697  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3"
	I1007 13:51:58.025474  788969 logs.go:123] Gathering logs for kube-apiserver [6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7] ...
	I1007 13:51:58.025511  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7"
	I1007 13:51:58.099784  788969 logs.go:123] Gathering logs for kube-apiserver [dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960] ...
	I1007 13:51:58.099828  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960"
	I1007 13:51:58.156884  788969 logs.go:123] Gathering logs for container status ...
	I1007 13:51:58.156916  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:51:58.245817  788969 logs.go:123] Gathering logs for kubernetes-dashboard [02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608] ...
	I1007 13:51:58.245848  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608"
	I1007 13:51:58.293375  788969 logs.go:123] Gathering logs for storage-provisioner [94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c] ...
	I1007 13:51:58.293411  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c"
	I1007 13:51:58.333307  788969 logs.go:123] Gathering logs for storage-provisioner [3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656] ...
	I1007 13:51:58.333338  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656"
	I1007 13:51:58.374185  788969 logs.go:123] Gathering logs for kube-scheduler [1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297] ...
	I1007 13:51:58.374218  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297"
	I1007 13:51:58.414799  788969 logs.go:123] Gathering logs for kindnet [54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f] ...
	I1007 13:51:58.414829  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f"
	I1007 13:51:58.473608  788969 out.go:358] Setting ErrFile to fd 2...
	I1007 13:51:58.473634  788969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 13:51:58.473748  788969 out.go:270] X Problems detected in kubelet:
	W1007 13:51:58.473763  788969 out.go:270]   Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: E1007 13:51:22.208036     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:58.473777  788969 out.go:270]   Oct 07 13:51:33 old-k8s-version-716021 kubelet[666]: E1007 13:51:33.206925     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:58.473786  788969 out.go:270]   Oct 07 13:51:34 old-k8s-version-716021 kubelet[666]: E1007 13:51:34.206498     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:58.473798  788969 out.go:270]   Oct 07 13:51:47 old-k8s-version-716021 kubelet[666]: E1007 13:51:47.231399     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:58.473804  788969 out.go:270]   Oct 07 13:51:48 old-k8s-version-716021 kubelet[666]: E1007 13:51:48.206953     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	I1007 13:51:58.473811  788969 out.go:358] Setting ErrFile to fd 2...
	I1007 13:51:58.473817  788969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:52:08.474751  788969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:52:08.488321  788969 api_server.go:72] duration metric: took 6m2.173367121s to wait for apiserver process to appear ...
	I1007 13:52:08.488351  788969 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:52:08.490582  788969 out.go:201] 
	W1007 13:52:08.492328  788969 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W1007 13:52:08.492353  788969 out.go:270] * 
	W1007 13:52:08.493335  788969 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:52:08.495726  788969 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-716021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-716021
helpers_test.go:235: (dbg) docker inspect old-k8s-version-716021:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce840d223dfaab846381ea462f7f0deae441d4071b219570d5e4ad687822862e",
	        "Created": "2024-10-07T13:43:05.221371985Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 789183,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-07T13:45:58.146230129Z",
	            "FinishedAt": "2024-10-07T13:45:57.006420303Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/ce840d223dfaab846381ea462f7f0deae441d4071b219570d5e4ad687822862e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce840d223dfaab846381ea462f7f0deae441d4071b219570d5e4ad687822862e/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce840d223dfaab846381ea462f7f0deae441d4071b219570d5e4ad687822862e/hosts",
	        "LogPath": "/var/lib/docker/containers/ce840d223dfaab846381ea462f7f0deae441d4071b219570d5e4ad687822862e/ce840d223dfaab846381ea462f7f0deae441d4071b219570d5e4ad687822862e-json.log",
	        "Name": "/old-k8s-version-716021",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-716021:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-716021",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c0e8f50da5990a6bedf8833813b7776b31886924c91cf3beb519d1c9a4cb1ba9-init/diff:/var/lib/docker/overlay2/e63a2c5503af6c1a5c1dd965c5cc29d76da2a1b8721a0b9206304ab209f33143/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c0e8f50da5990a6bedf8833813b7776b31886924c91cf3beb519d1c9a4cb1ba9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c0e8f50da5990a6bedf8833813b7776b31886924c91cf3beb519d1c9a4cb1ba9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c0e8f50da5990a6bedf8833813b7776b31886924c91cf3beb519d1c9a4cb1ba9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-716021",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-716021/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-716021",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-716021",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-716021",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "03994c8b6081ca867a05636fbc894d1b437e873806f955331382e284c83dfe8f",
	            "SandboxKey": "/var/run/docker/netns/03994c8b6081",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33799"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33800"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33801"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33802"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-716021": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "119e3a61963381b47c7d49b0a5a6e41f2814e895bb3e1355d0bc3404d2e8c41a",
	                    "EndpointID": "4a1e6a0450c945720070e513d0885f06d63cd7da40a3b6b3b399bad66d6be45e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-716021",
	                        "ce840d223dfa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-716021 -n old-k8s-version-716021
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-716021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-716021 logs -n 25: (2.208869241s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-180537 sudo find                             | cilium-180537             | jenkins | v1.34.0 | 07 Oct 24 13:41 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-180537 sudo crio                             | cilium-180537             | jenkins | v1.34.0 | 07 Oct 24 13:41 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-180537                                       | cilium-180537             | jenkins | v1.34.0 | 07 Oct 24 13:41 UTC | 07 Oct 24 13:41 UTC |
	| start   | -p force-systemd-env-622009                            | force-systemd-env-622009  | jenkins | v1.34.0 | 07 Oct 24 13:41 UTC | 07 Oct 24 13:42 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-040234                              | force-systemd-flag-040234 | jenkins | v1.34.0 | 07 Oct 24 13:41 UTC | 07 Oct 24 13:41 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-040234                           | force-systemd-flag-040234 | jenkins | v1.34.0 | 07 Oct 24 13:41 UTC | 07 Oct 24 13:41 UTC |
	| start   | -p cert-expiration-501751                              | cert-expiration-501751    | jenkins | v1.34.0 | 07 Oct 24 13:41 UTC | 07 Oct 24 13:42 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-622009                               | force-systemd-env-622009  | jenkins | v1.34.0 | 07 Oct 24 13:42 UTC | 07 Oct 24 13:42 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-622009                            | force-systemd-env-622009  | jenkins | v1.34.0 | 07 Oct 24 13:42 UTC | 07 Oct 24 13:42 UTC |
	| start   | -p cert-options-608723                                 | cert-options-608723       | jenkins | v1.34.0 | 07 Oct 24 13:42 UTC | 07 Oct 24 13:42 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-608723 ssh                                | cert-options-608723       | jenkins | v1.34.0 | 07 Oct 24 13:42 UTC | 07 Oct 24 13:42 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-608723 -- sudo                         | cert-options-608723       | jenkins | v1.34.0 | 07 Oct 24 13:42 UTC | 07 Oct 24 13:42 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-608723                                 | cert-options-608723       | jenkins | v1.34.0 | 07 Oct 24 13:42 UTC | 07 Oct 24 13:42 UTC |
	| start   | -p old-k8s-version-716021                              | old-k8s-version-716021    | jenkins | v1.34.0 | 07 Oct 24 13:42 UTC | 07 Oct 24 13:45 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-501751                              | cert-expiration-501751    | jenkins | v1.34.0 | 07 Oct 24 13:45 UTC | 07 Oct 24 13:45 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-716021        | old-k8s-version-716021    | jenkins | v1.34.0 | 07 Oct 24 13:45 UTC | 07 Oct 24 13:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-501751                              | cert-expiration-501751    | jenkins | v1.34.0 | 07 Oct 24 13:45 UTC | 07 Oct 24 13:45 UTC |
	| stop    | -p old-k8s-version-716021                              | old-k8s-version-716021    | jenkins | v1.34.0 | 07 Oct 24 13:45 UTC | 07 Oct 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| start   | -p no-preload-178678                                   | no-preload-178678         | jenkins | v1.34.0 | 07 Oct 24 13:45 UTC | 07 Oct 24 13:47 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-716021             | old-k8s-version-716021    | jenkins | v1.34.0 | 07 Oct 24 13:45 UTC | 07 Oct 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-716021                              | old-k8s-version-716021    | jenkins | v1.34.0 | 07 Oct 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-178678             | no-preload-178678         | jenkins | v1.34.0 | 07 Oct 24 13:47 UTC | 07 Oct 24 13:47 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-178678                                   | no-preload-178678         | jenkins | v1.34.0 | 07 Oct 24 13:47 UTC | 07 Oct 24 13:47 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-178678                  | no-preload-178678         | jenkins | v1.34.0 | 07 Oct 24 13:47 UTC | 07 Oct 24 13:47 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-178678                                   | no-preload-178678         | jenkins | v1.34.0 | 07 Oct 24 13:47 UTC | 07 Oct 24 13:52 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 13:47:23
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 13:47:23.810361  794355 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:47:23.810828  794355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:47:23.810837  794355 out.go:358] Setting ErrFile to fd 2...
	I1007 13:47:23.810842  794355 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:47:23.811371  794355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:47:23.811942  794355 out.go:352] Setting JSON to false
	I1007 13:47:23.814581  794355 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12593,"bootTime":1728296251,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1007 13:47:23.814694  794355 start.go:139] virtualization:  
	I1007 13:47:23.817436  794355 out.go:177] * [no-preload-178678] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:47:23.821821  794355 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:47:23.821953  794355 notify.go:220] Checking for updates...
	I1007 13:47:23.825095  794355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:47:23.826730  794355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:47:23.828463  794355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	I1007 13:47:23.830097  794355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:47:23.831679  794355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:47:23.834024  794355 config.go:182] Loaded profile config "no-preload-178678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:47:23.834606  794355 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:47:23.864740  794355 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:47:23.864882  794355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:47:23.920928  794355 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:47:23.910636539 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:47:23.921041  794355 docker.go:318] overlay module found
	I1007 13:47:23.924415  794355 out.go:177] * Using the docker driver based on existing profile
	I1007 13:47:23.926418  794355 start.go:297] selected driver: docker
	I1007 13:47:23.926442  794355 start.go:901] validating driver "docker" against &{Name:no-preload-178678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-178678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:47:23.926559  794355 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:47:23.927222  794355 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:47:23.975612  794355 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:47:23.965172091 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:47:23.976007  794355 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:47:23.976039  794355 cni.go:84] Creating CNI manager for ""
	I1007 13:47:23.976080  794355 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 13:47:23.976131  794355 start.go:340] cluster config:
	{Name:no-preload-178678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-178678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:47:23.978282  794355 out.go:177] * Starting "no-preload-178678" primary control-plane node in "no-preload-178678" cluster
	I1007 13:47:23.980343  794355 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 13:47:23.982028  794355 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1007 13:47:23.983934  794355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 13:47:23.984094  794355 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/config.json ...
	I1007 13:47:23.984186  794355 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 13:47:23.984469  794355 cache.go:107] acquiring lock: {Name:mk47932ca92317b50b1fd9618219d4310898a371 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:23.984555  794355 cache.go:115] /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1007 13:47:23.984564  794355 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.292µs
	I1007 13:47:23.984580  794355 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1007 13:47:23.984590  794355 cache.go:107] acquiring lock: {Name:mk68e4312cd665d68e30be9478800e1fcc85644f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:23.984621  794355 cache.go:115] /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1007 13:47:23.984626  794355 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 37.76µs
	I1007 13:47:23.984632  794355 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1007 13:47:23.984640  794355 cache.go:107] acquiring lock: {Name:mk4a70f876837ca2fe07cd7bd9abcdbca806555d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:23.984668  794355 cache.go:115] /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1007 13:47:23.984673  794355 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 33.796µs
	I1007 13:47:23.984679  794355 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1007 13:47:23.984691  794355 cache.go:107] acquiring lock: {Name:mk775e67e03da628dccad10730e434256c53046f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:23.984717  794355 cache.go:115] /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1007 13:47:23.984721  794355 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 35.2µs
	I1007 13:47:23.984727  794355 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1007 13:47:23.984741  794355 cache.go:107] acquiring lock: {Name:mk4908d036fcd399d3b6c35dd16f32e979c76a8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:23.984766  794355 cache.go:115] /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1007 13:47:23.984771  794355 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 36.627µs
	I1007 13:47:23.984777  794355 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1007 13:47:23.984785  794355 cache.go:107] acquiring lock: {Name:mk82dcbc55bf1bbc35605c317a49b0f041074033 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:23.984813  794355 cache.go:115] /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1007 13:47:23.984818  794355 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 34.28µs
	I1007 13:47:23.984823  794355 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1007 13:47:23.984831  794355 cache.go:107] acquiring lock: {Name:mk9f9e65c5c261c0ed0a0726ee7bd5e31b098711 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:23.984860  794355 cache.go:115] /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1007 13:47:23.984865  794355 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 34.527µs
	I1007 13:47:23.984870  794355 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1007 13:47:23.984878  794355 cache.go:107] acquiring lock: {Name:mk89b5997abefd8b1e88f23a6e013d87b7b19de5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:23.984902  794355 cache.go:115] /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1007 13:47:23.984906  794355 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 28.931µs
	I1007 13:47:23.984911  794355 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1007 13:47:23.984916  794355 cache.go:87] Successfully saved all images to host disk.
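
Note: the cache hits above reflect minikube's per-architecture image cache layout. A quick way to inspect it on the CI host (a sketch; the MINIKUBE_HOME path is copied from the log and is environment-specific):

    # List the arm64 image tarballs cached for this run
    MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
    ls -lh "$MINIKUBE_HOME/cache/images/arm64/registry.k8s.io/"
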
	I1007 13:47:24.009591  794355 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1007 13:47:24.009623  794355 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1007 13:47:24.009639  794355 cache.go:194] Successfully downloaded all kic artifacts
	I1007 13:47:24.009733  794355 start.go:360] acquireMachinesLock for no-preload-178678: {Name:mkf96e7e9a12542ced4fdaa4061e6224babb39c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1007 13:47:24.009807  794355 start.go:364] duration metric: took 53.201µs to acquireMachinesLock for "no-preload-178678"
	I1007 13:47:24.009831  794355 start.go:96] Skipping create...Using existing machine configuration
	I1007 13:47:24.009837  794355 fix.go:54] fixHost starting: 
	I1007 13:47:24.010130  794355 cli_runner.go:164] Run: docker container inspect no-preload-178678 --format={{.State.Status}}
	I1007 13:47:24.029838  794355 fix.go:112] recreateIfNeeded on no-preload-178678: state=Stopped err=<nil>
	W1007 13:47:24.029881  794355 fix.go:138] unexpected machine state, will restart: <nil>
	I1007 13:47:24.032276  794355 out.go:177] * Restarting existing docker container for "no-preload-178678" ...
	I1007 13:47:23.313778  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:25.811416  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:24.034064  794355 cli_runner.go:164] Run: docker start no-preload-178678
	I1007 13:47:24.391631  794355 cli_runner.go:164] Run: docker container inspect no-preload-178678 --format={{.State.Status}}
	I1007 13:47:24.414879  794355 kic.go:430] container "no-preload-178678" state is running.
	I1007 13:47:24.415258  794355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178678
	I1007 13:47:24.439534  794355 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/config.json ...
	I1007 13:47:24.439793  794355 machine.go:93] provisionDockerMachine start ...
	I1007 13:47:24.439918  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:24.467643  794355 main.go:141] libmachine: Using SSH client type: native
	I1007 13:47:24.467906  794355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1007 13:47:24.467917  794355 main.go:141] libmachine: About to run SSH command:
	hostname
	I1007 13:47:24.468674  794355 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47112->127.0.0.1:33804: read: connection reset by peer
	I1007 13:47:27.606024  794355 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-178678
	
	I1007 13:47:27.606056  794355 ubuntu.go:169] provisioning hostname "no-preload-178678"
	I1007 13:47:27.606120  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:27.624173  794355 main.go:141] libmachine: Using SSH client type: native
	I1007 13:47:27.624431  794355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1007 13:47:27.624443  794355 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-178678 && echo "no-preload-178678" | sudo tee /etc/hostname
	I1007 13:47:27.771307  794355 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-178678
	
	I1007 13:47:27.771396  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:27.790716  794355 main.go:141] libmachine: Using SSH client type: native
	I1007 13:47:27.790976  794355 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413c00] 0x416440 <nil>  [] 0s} 127.0.0.1 33804 <nil> <nil>}
	I1007 13:47:27.791000  794355 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-178678' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-178678/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-178678' | sudo tee -a /etc/hosts; 
				fi
			fi
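
Note: the hosts-file script above is idempotent. Debian/Ubuntu images conventionally map the machine's own hostname to 127.0.1.1, so an existing 127.0.1.1 entry is rewritten in place and a new one is appended only if absent. A minimal post-provision check (hypothetical, not part of the test run):

    # Confirm the hostname resolves locally inside the node container
    grep '^127.0.1.1' /etc/hosts      # expect: 127.0.1.1 no-preload-178678
    getent hosts no-preload-178678
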
	I1007 13:47:27.967015  794355 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1007 13:47:27.967044  794355 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18424-574640/.minikube CaCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18424-574640/.minikube}
	I1007 13:47:27.967128  794355 ubuntu.go:177] setting up certificates
	I1007 13:47:27.967140  794355 provision.go:84] configureAuth start
	I1007 13:47:27.967224  794355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178678
	I1007 13:47:27.990493  794355 provision.go:143] copyHostCerts
	I1007 13:47:27.990560  794355 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-574640/.minikube/key.pem, removing ...
	I1007 13:47:27.990570  794355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-574640/.minikube/key.pem
	I1007 13:47:27.990642  794355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/key.pem (1679 bytes)
	I1007 13:47:27.990749  794355 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-574640/.minikube/ca.pem, removing ...
	I1007 13:47:27.990755  794355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-574640/.minikube/ca.pem
	I1007 13:47:27.990782  794355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/ca.pem (1082 bytes)
	I1007 13:47:27.990842  794355 exec_runner.go:144] found /home/jenkins/minikube-integration/18424-574640/.minikube/cert.pem, removing ...
	I1007 13:47:27.990847  794355 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18424-574640/.minikube/cert.pem
	I1007 13:47:27.990871  794355 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18424-574640/.minikube/cert.pem (1123 bytes)
	I1007 13:47:27.990925  794355 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem org=jenkins.no-preload-178678 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-178678]
	I1007 13:47:28.361168  794355 provision.go:177] copyRemoteCerts
	I1007 13:47:28.361294  794355 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1007 13:47:28.361359  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:28.380640  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:28.479931  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1007 13:47:28.508757  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1007 13:47:28.537154  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1007 13:47:28.566746  794355 provision.go:87] duration metric: took 599.574577ms to configureAuth
	I1007 13:47:28.566818  794355 ubuntu.go:193] setting minikube options for container-runtime
	I1007 13:47:28.567050  794355 config.go:182] Loaded profile config "no-preload-178678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:47:28.567065  794355 machine.go:96] duration metric: took 4.127264548s to provisionDockerMachine
	I1007 13:47:28.567075  794355 start.go:293] postStartSetup for "no-preload-178678" (driver="docker")
	I1007 13:47:28.567105  794355 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1007 13:47:28.567169  794355 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1007 13:47:28.567214  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:28.584605  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:28.684107  794355 ssh_runner.go:195] Run: cat /etc/os-release
	I1007 13:47:28.687937  794355 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1007 13:47:28.687994  794355 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1007 13:47:28.688005  794355 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1007 13:47:28.688020  794355 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1007 13:47:28.688037  794355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-574640/.minikube/addons for local assets ...
	I1007 13:47:28.688122  794355 filesync.go:126] Scanning /home/jenkins/minikube-integration/18424-574640/.minikube/files for local assets ...
	I1007 13:47:28.688236  794355 filesync.go:149] local asset: /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/ssl/certs/5801632.pem -> 5801632.pem in /etc/ssl/certs
	I1007 13:47:28.688388  794355 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1007 13:47:28.699199  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/ssl/certs/5801632.pem --> /etc/ssl/certs/5801632.pem (1708 bytes)
	I1007 13:47:28.725552  794355 start.go:296] duration metric: took 158.442436ms for postStartSetup
	I1007 13:47:28.725654  794355 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:47:28.725732  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:28.743441  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:28.839347  794355 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1007 13:47:28.844212  794355 fix.go:56] duration metric: took 4.834367139s for fixHost
	I1007 13:47:28.844242  794355 start.go:83] releasing machines lock for "no-preload-178678", held for 4.834424615s
	I1007 13:47:28.844334  794355 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-178678
	I1007 13:47:28.861884  794355 ssh_runner.go:195] Run: cat /version.json
	I1007 13:47:28.861945  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:28.862042  794355 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1007 13:47:28.862107  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:28.883575  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:28.895565  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:28.981762  794355 ssh_runner.go:195] Run: systemctl --version
	I1007 13:47:29.138712  794355 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1007 13:47:29.143849  794355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1007 13:47:29.162519  794355 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1007 13:47:29.162620  794355 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1007 13:47:29.172007  794355 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
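
Note: the find/sed pass above ensures any loopback CNI config carries an explicit "name" field and a cniVersion of 1.0.0. After patching, a conforming file would look roughly like the following (filename and field order are illustrative, not taken from the log):

    cat /etc/cni/net.d/loopback.conf
    # {
    #     "cniVersion": "1.0.0",
    #     "name": "loopback",
    #     "type": "loopback"
    # }
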
	I1007 13:47:29.172035  794355 start.go:495] detecting cgroup driver to use...
	I1007 13:47:29.172081  794355 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1007 13:47:29.172136  794355 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1007 13:47:29.187282  794355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1007 13:47:29.200803  794355 docker.go:217] disabling cri-docker service (if available) ...
	I1007 13:47:29.200885  794355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1007 13:47:29.214549  794355 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1007 13:47:29.227139  794355 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1007 13:47:29.319427  794355 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1007 13:47:29.421165  794355 docker.go:233] disabling docker service ...
	I1007 13:47:29.421255  794355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1007 13:47:29.435766  794355 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1007 13:47:29.450913  794355 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1007 13:47:29.571586  794355 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1007 13:47:29.667308  794355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1007 13:47:29.680765  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1007 13:47:29.705879  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1007 13:47:29.718006  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1007 13:47:29.729206  794355 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1007 13:47:29.729320  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1007 13:47:29.741315  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 13:47:29.754021  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1007 13:47:29.766851  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1007 13:47:29.784810  794355 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1007 13:47:29.796048  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1007 13:47:29.809014  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1007 13:47:29.821019  794355 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1007 13:47:29.832919  794355 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1007 13:47:29.844310  794355 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1007 13:47:29.855015  794355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:47:29.958884  794355 ssh_runner.go:195] Run: sudo systemctl restart containerd
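
Note: the sed series above pins the sandbox image to registry.k8s.io/pause:3.10, forces the cgroupfs driver (SystemdCgroup = false), migrates runtime handlers to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports before the restart. A spot check of the resulting config (a sketch; key names are taken from the sed expressions above):

    # Expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.10",
    # conf_dir = "/etc/cni/net.d", enable_unprivileged_ports = true
    sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    systemctl is-active containerd
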
	I1007 13:47:30.194317  794355 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1007 13:47:30.194432  794355 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1007 13:47:30.199470  794355 start.go:563] Will wait 60s for crictl version
	I1007 13:47:30.199552  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:47:30.204959  794355 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1007 13:47:30.257803  794355 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
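
Note: the version probe above relies on the endpoint written to /etc/crictl.yaml a few lines earlier. The equivalent explicit invocation, with the same socket spelled out, would be:

    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock info
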
	I1007 13:47:30.257943  794355 ssh_runner.go:195] Run: containerd --version
	I1007 13:47:30.285699  794355 ssh_runner.go:195] Run: containerd --version
	I1007 13:47:30.321098  794355 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1007 13:47:30.323575  794355 cli_runner.go:164] Run: docker network inspect no-preload-178678 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1007 13:47:30.339448  794355 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1007 13:47:30.343355  794355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:47:30.354634  794355 kubeadm.go:883] updating cluster {Name:no-preload-178678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-178678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1007 13:47:30.354760  794355 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 13:47:30.354811  794355 ssh_runner.go:195] Run: sudo crictl images --output json
	I1007 13:47:30.391637  794355 containerd.go:627] all images are preloaded for containerd runtime.
	I1007 13:47:30.391665  794355 cache_images.go:84] Images are preloaded, skipping loading
	I1007 13:47:30.391674  794355 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I1007 13:47:30.391777  794355 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-178678 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-178678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
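
Note: the empty ExecStart= line in the kubelet drop-in above is deliberate systemd syntax: in an override, an empty assignment clears the ExecStart inherited from the base kubelet.service so the next line fully redefines it. To see the merged result after the daemon-reload later in this log (a generic check, not run by the test):

    systemctl cat kubelet                 # base unit plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart   # the single effective command line
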
	I1007 13:47:30.391846  794355 ssh_runner.go:195] Run: sudo crictl info
	I1007 13:47:30.433946  794355 cni.go:84] Creating CNI manager for ""
	I1007 13:47:30.433973  794355 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 13:47:30.433986  794355 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1007 13:47:30.434009  794355 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-178678 NodeName:no-preload-178678 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1007 13:47:30.434181  794355 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-178678"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1007 13:47:30.434257  794355 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1007 13:47:30.446898  794355 binaries.go:44] Found k8s binaries, skipping transfer
	I1007 13:47:30.446973  794355 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1007 13:47:30.456905  794355 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1007 13:47:30.477199  794355 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1007 13:47:30.498147  794355 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
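
Note: the rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (2171 bytes, above). One non-destructive way to validate such a file against the target binary (a hypothetical step, not run by the test) is a dry run:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
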
	I1007 13:47:30.518864  794355 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1007 13:47:30.522798  794355 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1007 13:47:30.534421  794355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:47:30.626595  794355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:47:30.643582  794355 certs.go:68] Setting up /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678 for IP: 192.168.85.2
	I1007 13:47:30.643610  794355 certs.go:194] generating shared ca certs ...
	I1007 13:47:30.643627  794355 certs.go:226] acquiring lock for ca certs: {Name:mkb94cd23ae3efb673f2949842bd2c98014816e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:47:30.643875  794355 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18424-574640/.minikube/ca.key
	I1007 13:47:30.643926  794355 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.key
	I1007 13:47:30.643937  794355 certs.go:256] generating profile certs ...
	I1007 13:47:30.644027  794355 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.key
	I1007 13:47:30.644109  794355 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/apiserver.key.e19e5b20
	I1007 13:47:30.644160  794355 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/proxy-client.key
	I1007 13:47:30.644300  794355 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/580163.pem (1338 bytes)
	W1007 13:47:30.644337  794355 certs.go:480] ignoring /home/jenkins/minikube-integration/18424-574640/.minikube/certs/580163_empty.pem, impossibly tiny 0 bytes
	I1007 13:47:30.644350  794355 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca-key.pem (1679 bytes)
	I1007 13:47:30.644374  794355 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/ca.pem (1082 bytes)
	I1007 13:47:30.644400  794355 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/cert.pem (1123 bytes)
	I1007 13:47:30.644426  794355 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/certs/key.pem (1679 bytes)
	I1007 13:47:30.644480  794355 certs.go:484] found cert: /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/ssl/certs/5801632.pem (1708 bytes)
	I1007 13:47:30.645181  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1007 13:47:30.675508  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1007 13:47:30.702796  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1007 13:47:30.728701  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1007 13:47:30.763284  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1007 13:47:30.815229  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1007 13:47:30.854680  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1007 13:47:30.887250  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1007 13:47:30.917899  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/certs/580163.pem --> /usr/share/ca-certificates/580163.pem (1338 bytes)
	I1007 13:47:30.953734  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/ssl/certs/5801632.pem --> /usr/share/ca-certificates/5801632.pem (1708 bytes)
	I1007 13:47:30.983063  794355 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18424-574640/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1007 13:47:31.020416  794355 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1007 13:47:31.041762  794355 ssh_runner.go:195] Run: openssl version
	I1007 13:47:31.050558  794355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/580163.pem && ln -fs /usr/share/ca-certificates/580163.pem /etc/ssl/certs/580163.pem"
	I1007 13:47:31.062497  794355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/580163.pem
	I1007 13:47:31.066649  794355 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  7 13:06 /usr/share/ca-certificates/580163.pem
	I1007 13:47:31.066749  794355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/580163.pem
	I1007 13:47:31.074290  794355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/580163.pem /etc/ssl/certs/51391683.0"
	I1007 13:47:31.084762  794355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5801632.pem && ln -fs /usr/share/ca-certificates/5801632.pem /etc/ssl/certs/5801632.pem"
	I1007 13:47:31.095956  794355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5801632.pem
	I1007 13:47:31.100004  794355 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  7 13:06 /usr/share/ca-certificates/5801632.pem
	I1007 13:47:31.100080  794355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5801632.pem
	I1007 13:47:31.107952  794355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5801632.pem /etc/ssl/certs/3ec20f2e.0"
	I1007 13:47:31.119141  794355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1007 13:47:31.130515  794355 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:47:31.134970  794355 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  7 12:56 /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:47:31.135043  794355 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1007 13:47:31.143074  794355 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
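
The three test/ln -fs pairs above implement OpenSSL's hashed-symlink lookup convention: TLS libraries locate a CA in /etc/ssl/certs through a symlink named <subject-hash>.0, and openssl x509 -hash prints that hash. A minimal check against the minikube CA from this run:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above
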
	I1007 13:47:31.153475  794355 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1007 13:47:31.157431  794355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1007 13:47:31.164602  794355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1007 13:47:31.172307  794355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1007 13:47:31.179676  794355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1007 13:47:31.187428  794355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1007 13:47:31.195719  794355 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
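
The six openssl runs above are expiry probes, not printouts: with -checkend 86400, openssl exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero otherwise, which is how minikube decides the existing control-plane certs can be reused. A minimal sketch of the same check:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	    echo "apiserver.crt is valid for at least another 24h"
	else
	    echo "apiserver.crt expires within 24h; regenerate it"
	fi
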
	I1007 13:47:31.202836  794355 kubeadm.go:392] StartCluster: {Name:no-preload-178678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-178678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:47:31.202941  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1007 13:47:31.203005  794355 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1007 13:47:31.259266  794355 cri.go:89] found id: "dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f"
	I1007 13:47:31.259292  794355 cri.go:89] found id: "3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3"
	I1007 13:47:31.259296  794355 cri.go:89] found id: "00c6ca6d455614c7cf1a401b1f8b7521f049d2b692610659ae4683bce57d9e32"
	I1007 13:47:31.259301  794355 cri.go:89] found id: "b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623"
	I1007 13:47:31.259306  794355 cri.go:89] found id: "bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93"
	I1007 13:47:31.259311  794355 cri.go:89] found id: "707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389"
	I1007 13:47:31.259314  794355 cri.go:89] found id: "c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf"
	I1007 13:47:31.259317  794355 cri.go:89] found id: "1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb"
	I1007 13:47:31.259320  794355 cri.go:89] found id: ""
	I1007 13:47:31.259391  794355 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1007 13:47:31.291954  794355 cri.go:116] JSON = null
	W1007 13:47:31.292020  794355 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I1007 13:47:31.292106  794355 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1007 13:47:31.308877  794355 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1007 13:47:31.308901  794355 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1007 13:47:31.308964  794355 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1007 13:47:31.325526  794355 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1007 13:47:31.326225  794355 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-178678" does not appear in /home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:47:31.326534  794355 kubeconfig.go:62] /home/jenkins/minikube-integration/18424-574640/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-178678" cluster setting kubeconfig missing "no-preload-178678" context setting]
	I1007 13:47:31.331527  794355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/kubeconfig: {Name:mk8cb646df388630470eb87db824f7b511497a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:47:31.337102  794355 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1007 13:47:31.369933  794355 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I1007 13:47:31.369970  794355 kubeadm.go:597] duration metric: took 61.063115ms to restartPrimaryControlPlane
	I1007 13:47:31.369980  794355 kubeadm.go:394] duration metric: took 167.161554ms to StartCluster
	I1007 13:47:31.369996  794355 settings.go:142] acquiring lock: {Name:mk8a7c208419d2453ea37ed5e7d0421609f0d046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:47:31.370064  794355 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:47:31.371135  794355 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/kubeconfig: {Name:mk8cb646df388630470eb87db824f7b511497a09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 13:47:31.371338  794355 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1007 13:47:31.371727  794355 config.go:182] Loaded profile config "no-preload-178678": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:47:31.371795  794355 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1007 13:47:31.371917  794355 addons.go:69] Setting storage-provisioner=true in profile "no-preload-178678"
	I1007 13:47:31.371937  794355 addons.go:234] Setting addon storage-provisioner=true in "no-preload-178678"
	W1007 13:47:31.371947  794355 addons.go:243] addon storage-provisioner should already be in state true
	I1007 13:47:31.371974  794355 host.go:66] Checking if "no-preload-178678" exists ...
	I1007 13:47:31.371982  794355 addons.go:69] Setting default-storageclass=true in profile "no-preload-178678"
	I1007 13:47:31.371997  794355 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-178678"
	I1007 13:47:31.372331  794355 cli_runner.go:164] Run: docker container inspect no-preload-178678 --format={{.State.Status}}
	I1007 13:47:31.372455  794355 cli_runner.go:164] Run: docker container inspect no-preload-178678 --format={{.State.Status}}
	I1007 13:47:31.374944  794355 addons.go:69] Setting dashboard=true in profile "no-preload-178678"
	I1007 13:47:31.376351  794355 addons.go:234] Setting addon dashboard=true in "no-preload-178678"
	W1007 13:47:31.376371  794355 addons.go:243] addon dashboard should already be in state true
	I1007 13:47:31.376403  794355 host.go:66] Checking if "no-preload-178678" exists ...
	I1007 13:47:31.376864  794355 cli_runner.go:164] Run: docker container inspect no-preload-178678 --format={{.State.Status}}
	I1007 13:47:31.378500  794355 out.go:177] * Verifying Kubernetes components...
	I1007 13:47:31.376253  794355 addons.go:69] Setting metrics-server=true in profile "no-preload-178678"
	I1007 13:47:31.383662  794355 addons.go:234] Setting addon metrics-server=true in "no-preload-178678"
	W1007 13:47:31.383679  794355 addons.go:243] addon metrics-server should already be in state true
	I1007 13:47:31.383722  794355 host.go:66] Checking if "no-preload-178678" exists ...
	I1007 13:47:31.384216  794355 cli_runner.go:164] Run: docker container inspect no-preload-178678 --format={{.State.Status}}
	I1007 13:47:31.385812  794355 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1007 13:47:31.435881  794355 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1007 13:47:31.438640  794355 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:47:31.438672  794355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1007 13:47:31.438771  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:31.443727  794355 addons.go:234] Setting addon default-storageclass=true in "no-preload-178678"
	W1007 13:47:31.443748  794355 addons.go:243] addon default-storageclass should already be in state true
	I1007 13:47:31.443774  794355 host.go:66] Checking if "no-preload-178678" exists ...
	I1007 13:47:31.444207  794355 cli_runner.go:164] Run: docker container inspect no-preload-178678 --format={{.State.Status}}
	I1007 13:47:31.453313  794355 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1007 13:47:31.456404  794355 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1007 13:47:31.460857  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1007 13:47:31.460885  794355 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1007 13:47:31.460975  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:31.489475  794355 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1007 13:47:27.820147  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:29.822185  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:32.312523  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:31.491566  794355 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1007 13:47:31.491598  794355 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1007 13:47:31.491688  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:31.517008  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:31.522345  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:31.535423  794355 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1007 13:47:31.535450  794355 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1007 13:47:31.535521  794355 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-178678
	I1007 13:47:31.563334  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:31.579597  794355 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33804 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/no-preload-178678/id_rsa Username:docker}
	I1007 13:47:31.656014  794355 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1007 13:47:31.701611  794355 node_ready.go:35] waiting up to 6m0s for node "no-preload-178678" to be "Ready" ...
	I1007 13:47:31.864648  794355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:47:31.870187  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1007 13:47:31.870216  794355 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1007 13:47:31.886446  794355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:47:31.996508  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1007 13:47:31.996585  794355 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1007 13:47:32.000095  794355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1007 13:47:32.000196  794355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1007 13:47:32.189487  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1007 13:47:32.189565  794355 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1007 13:47:32.277719  794355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1007 13:47:32.277818  794355 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1007 13:47:32.300025  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1007 13:47:32.300112  794355 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1007 13:47:32.382556  794355 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1007 13:47:32.382653  794355 retry.go:31] will retry after 211.563357ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1007 13:47:32.422375  794355 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1007 13:47:32.422471  794355 retry.go:31] will retry after 296.753315ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
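
The two apply failures above are expected at this point in the restart: kubectl is pointed at https://localhost:8443 before the restarted apiserver is listening, so the connection is refused and retry.go reschedules each apply after a short randomized delay (211ms and 296ms here). The Completed lines at 13:47:39 below show the same manifests applying successfully about seven seconds later, once the apiserver is up. The pattern reduces to a bounded retry loop, sketched here with a fixed linear backoff (hypothetical; minikube's own delays are randomized and grow between attempts):

	for i in 1 2 3 4 5; do
	    kubectl apply -f /etc/kubernetes/addons/storageclass.yaml && break
	    sleep "$i"    # crude backoff between attempts
	done
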
	I1007 13:47:32.456997  794355 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:47:32.457076  794355 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1007 13:47:32.472338  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1007 13:47:32.472427  794355 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1007 13:47:32.553315  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1007 13:47:32.553402  794355 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1007 13:47:32.556352  794355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1007 13:47:32.594666  794355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1007 13:47:32.643749  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1007 13:47:32.643778  794355 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1007 13:47:32.719721  794355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1007 13:47:32.823193  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1007 13:47:32.823219  794355 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1007 13:47:33.055548  794355 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 13:47:33.055579  794355 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1007 13:47:33.202206  794355 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1007 13:47:34.314786  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:36.824207  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:36.651662  794355 node_ready.go:49] node "no-preload-178678" has status "Ready":"True"
	I1007 13:47:36.651700  794355 node_ready.go:38] duration metric: took 4.950048149s for node "no-preload-178678" to be "Ready" ...
	I1007 13:47:36.651712  794355 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1007 13:47:36.678600  794355 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zc8p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.697266  794355 pod_ready.go:93] pod "coredns-7c65d6cfc9-zc8p4" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:36.697294  794355 pod_ready.go:82] duration metric: took 18.653808ms for pod "coredns-7c65d6cfc9-zc8p4" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.697307  794355 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-178678" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.704874  794355 pod_ready.go:93] pod "etcd-no-preload-178678" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:36.704901  794355 pod_ready.go:82] duration metric: took 7.586574ms for pod "etcd-no-preload-178678" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.704917  794355 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-178678" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.736067  794355 pod_ready.go:93] pod "kube-apiserver-no-preload-178678" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:36.736096  794355 pod_ready.go:82] duration metric: took 31.17053ms for pod "kube-apiserver-no-preload-178678" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.736110  794355 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-178678" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.749371  794355 pod_ready.go:93] pod "kube-controller-manager-no-preload-178678" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:36.749398  794355 pod_ready.go:82] duration metric: took 13.278925ms for pod "kube-controller-manager-no-preload-178678" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.749411  794355 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-46xb9" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.862814  794355 pod_ready.go:93] pod "kube-proxy-46xb9" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:36.862842  794355 pod_ready.go:82] duration metric: took 113.423594ms for pod "kube-proxy-46xb9" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:36.862856  794355 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-178678" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:38.869360  794355 pod_ready.go:103] pod "kube-scheduler-no-preload-178678" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:39.672313  794355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.115867069s)
	I1007 13:47:39.672357  794355 addons.go:475] Verifying addon metrics-server=true in "no-preload-178678"
	I1007 13:47:39.802921  794355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.208165181s)
	I1007 13:47:39.802988  794355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (7.083238741s)
	I1007 13:47:39.803242  794355 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.600999886s)
	I1007 13:47:39.805256  794355 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-178678 addons enable metrics-server
	
	I1007 13:47:39.814087  794355 out.go:177] * Enabled addons: metrics-server, storage-provisioner, dashboard, default-storageclass
	I1007 13:47:39.312769  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:41.811539  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:39.815958  794355 addons.go:510] duration metric: took 8.444172351s for enable addons: enabled=[metrics-server storage-provisioner dashboard default-storageclass]
	I1007 13:47:41.368908  794355 pod_ready.go:103] pod "kube-scheduler-no-preload-178678" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:43.370334  794355 pod_ready.go:103] pod "kube-scheduler-no-preload-178678" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:43.819839  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:46.311911  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:45.380852  794355 pod_ready.go:103] pod "kube-scheduler-no-preload-178678" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:46.869223  794355 pod_ready.go:93] pod "kube-scheduler-no-preload-178678" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:46.869298  794355 pod_ready.go:82] duration metric: took 10.006432803s for pod "kube-scheduler-no-preload-178678" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:46.869326  794355 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:48.312191  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:50.811830  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:48.876924  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:50.878450  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:53.375602  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:53.310894  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:55.311229  788969 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:56.311516  788969 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:56.311544  788969 pod_ready.go:82] duration metric: took 1m20.007043994s for pod "kube-controller-manager-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.311557  788969 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hdch9" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.317840  788969 pod_ready.go:93] pod "kube-proxy-hdch9" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:56.317869  788969 pod_ready.go:82] duration metric: took 6.304821ms for pod "kube-proxy-hdch9" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.317882  788969 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.323346  788969 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-716021" in "kube-system" namespace has status "Ready":"True"
	I1007 13:47:56.323374  788969 pod_ready.go:82] duration metric: took 5.462529ms for pod "kube-scheduler-old-k8s-version-716021" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:56.323387  788969 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace to be "Ready" ...
	I1007 13:47:55.375810  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:57.876325  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:47:58.330847  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:00.496638  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:00.500870  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:02.875737  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:02.830995  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:05.330608  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:07.378265  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:04.875871  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:07.376652  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:09.829529  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:11.830063  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:09.875782  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:12.375731  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:13.830398  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:16.330624  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:14.377047  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:16.875540  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:18.829935  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:20.834962  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:18.876235  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:20.881655  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:23.375713  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:23.329605  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:25.330833  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:25.875084  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:27.875632  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:27.831879  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:30.331107  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:30.375161  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:32.379649  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:32.829901  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:35.330367  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:37.330969  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:34.876324  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:36.877418  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:39.829226  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:41.829841  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:39.376024  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:41.377353  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:44.329644  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:46.330697  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:43.875815  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:46.375861  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:48.376033  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:48.330894  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:50.829621  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:50.376120  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:52.879026  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:52.830554  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:55.329892  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:55.375383  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:57.376247  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:57.832139  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:00.381288  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:48:59.875695  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:02.376498  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:02.829594  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:04.831500  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:07.329846  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:04.876570  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:07.376065  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:09.329971  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:11.330024  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:09.376634  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:11.877493  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:13.830806  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:15.831685  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:14.375363  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:16.376066  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:17.853965  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:20.330145  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:22.330226  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:18.876063  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:21.375742  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:23.376392  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:24.829579  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:26.829872  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:25.875584  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:28.375008  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:28.830038  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:30.832511  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:30.375202  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:32.375986  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:33.330702  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:35.331136  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:34.878999  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:37.376680  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:37.834831  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:40.330871  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:42.334259  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:39.876158  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:42.376774  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:44.830040  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:46.831538  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:44.876244  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:47.376360  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:49.329922  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:51.330390  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:49.875683  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:52.377454  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:53.829974  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:56.330400  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:54.876198  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:57.375504  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:58.829116  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:00.830835  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:49:59.376484  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:01.383103  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:03.329964  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:05.330720  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:03.875764  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:05.879191  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:08.376931  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:07.833777  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:10.331002  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:10.876186  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:13.375323  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:12.830194  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:14.831603  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:17.330076  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:15.375763  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:17.876544  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:19.330317  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:21.330539  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:20.375754  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:22.376043  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:23.829440  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:25.832290  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:24.875715  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:26.875766  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:28.330077  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:30.330410  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:28.876547  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:31.377901  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:32.830347  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:35.329841  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:37.330605  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:33.875499  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:35.876753  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:38.376127  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:39.333940  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:41.830665  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:40.875079  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:42.875466  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:44.329814  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:46.330801  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:44.876007  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:47.375299  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:48.835715  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:51.330141  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:49.375828  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:51.376034  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:53.330922  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:55.830596  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:53.875833  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:56.375036  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:58.376603  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:50:57.833565  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:00.410864  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:00.394556  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:02.875913  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:02.830510  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:05.330789  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:04.876428  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:07.375779  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:07.831297  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:10.330220  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:12.330573  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:09.876149  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:11.876601  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:14.830258  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:17.330295  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:14.375508  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:16.875554  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:19.330834  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:21.829496  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:18.875717  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:20.876197  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:23.376025  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:23.829855  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:25.830331  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:25.376185  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:27.875838  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:27.839887  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:30.335084  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:30.376432  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:32.874961  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:32.829640  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:34.830683  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:37.329315  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:34.875811  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:36.876256  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:39.330613  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:41.829295  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:39.375427  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:41.375736  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:43.375871  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:43.833110  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:46.329539  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:45.378865  794355 pod_ready.go:103] pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:46.876514  794355 pod_ready.go:82] duration metric: took 4m0.007156897s for pod "metrics-server-6867b74b74-86hgb" in "kube-system" namespace to be "Ready" ...
	E1007 13:51:46.876544  794355 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1007 13:51:46.876554  794355 pod_ready.go:39] duration metric: took 4m10.224827266s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
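Note: the two "duration metric" lines above mark the wait loop giving up. pod_ready polls each labelled pod's Ready condition at roughly two-second intervals (visible in the timestamps) until a four-minute budget runs out; the resulting "context deadline exceeded" is logged as an error but is not fatal, and provisioning continues with the apiserver checks below. A minimal client-go sketch of that wait pattern follows — not minikube's actual pod_ready.go; the pod name and kubeconfig path are placeholders:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod's Ready condition every 2s until it is True
    // or the timeout elapses; on timeout it returns context.DeadlineExceeded,
    // the same error surfaced in the log above.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        return wait.PollUntilContextCancel(ctx, 2*time.Second, true, func(ctx context.Context) (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as "not ready yet" and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    fmt.Printf("pod %q Ready=%s\n", name, c.Status)
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // A 4-minute budget, as in the "took 4m0.007156897s" line above.
        fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-6867b74b74-86hgb", 4*time.Minute))
    }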
	I1007 13:51:46.876570  794355 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:51:46.876604  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:51:46.876668  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:51:46.928843  794355 cri.go:89] found id: "43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d"
	I1007 13:51:46.928878  794355 cri.go:89] found id: "bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93"
	I1007 13:51:46.928883  794355 cri.go:89] found id: ""
	I1007 13:51:46.928891  794355 logs.go:282] 2 containers: [43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93]
	I1007 13:51:46.928960  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:46.932727  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:46.936597  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 13:51:46.936695  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:51:46.990641  794355 cri.go:89] found id: "33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b"
	I1007 13:51:46.990667  794355 cri.go:89] found id: "707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389"
	I1007 13:51:46.990683  794355 cri.go:89] found id: ""
	I1007 13:51:46.990692  794355 logs.go:282] 2 containers: [33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b 707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389]
	I1007 13:51:46.990762  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:46.994672  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:46.998432  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 13:51:46.998517  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:51:47.038728  794355 cri.go:89] found id: "9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879"
	I1007 13:51:47.038754  794355 cri.go:89] found id: "dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f"
	I1007 13:51:47.038759  794355 cri.go:89] found id: ""
	I1007 13:51:47.038767  794355 logs.go:282] 2 containers: [9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879 dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f]
	I1007 13:51:47.038823  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.042991  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.046830  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:51:47.046904  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:51:47.091734  794355 cri.go:89] found id: "cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6"
	I1007 13:51:47.091757  794355 cri.go:89] found id: "c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf"
	I1007 13:51:47.091761  794355 cri.go:89] found id: ""
	I1007 13:51:47.091769  794355 logs.go:282] 2 containers: [cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6 c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf]
	I1007 13:51:47.091825  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.095655  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.099483  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:51:47.099608  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:51:47.139376  794355 cri.go:89] found id: "42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e"
	I1007 13:51:47.139400  794355 cri.go:89] found id: "b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623"
	I1007 13:51:47.139405  794355 cri.go:89] found id: ""
	I1007 13:51:47.139413  794355 logs.go:282] 2 containers: [42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623]
	I1007 13:51:47.139470  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.143682  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.147465  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:51:47.147540  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:51:47.194440  794355 cri.go:89] found id: "ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40"
	I1007 13:51:47.194463  794355 cri.go:89] found id: "1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb"
	I1007 13:51:47.194468  794355 cri.go:89] found id: ""
	I1007 13:51:47.194475  794355 logs.go:282] 2 containers: [ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40 1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb]
	I1007 13:51:47.194569  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.198421  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.201770  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 13:51:47.201886  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:51:47.253639  794355 cri.go:89] found id: "8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457"
	I1007 13:51:47.253778  794355 cri.go:89] found id: "3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3"
	I1007 13:51:47.253799  794355 cri.go:89] found id: ""
	I1007 13:51:47.253814  794355 logs.go:282] 2 containers: [8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457 3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3]
	I1007 13:51:47.253884  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.258080  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.261648  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:51:47.261769  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:51:47.307818  794355 cri.go:89] found id: "26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb"
	I1007 13:51:47.307845  794355 cri.go:89] found id: ""
	I1007 13:51:47.307854  794355 logs.go:282] 1 containers: [26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb]
	I1007 13:51:47.307930  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.311840  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 13:51:47.311935  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 13:51:47.354816  794355 cri.go:89] found id: "6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817"
	I1007 13:51:47.354840  794355 cri.go:89] found id: "f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286"
	I1007 13:51:47.354846  794355 cri.go:89] found id: ""
	I1007 13:51:47.354854  794355 logs.go:282] 2 containers: [6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817 f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286]
	I1007 13:51:47.354927  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:47.358550  794355 ssh_runner.go:195] Run: which crictl
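Note: with the wait abandoned, the enumeration above inventories the CRI containers for each control-plane component by running sudo crictl ps -a --quiet --name=<component> on the node (via ssh_runner.go). Most components report two IDs, consistent with a SecondStart run — one container left over from the first boot plus the currently running one — while kubernetes-dashboard reports a single ID. A local sketch of the same listing, assuming crictl is installed and can reach the CRI socket (minikube executes this on the node, not the host):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs the same crictl query shown in the log and returns
    // one container ID per output line.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy",
            "kube-controller-manager", "kindnet", "kubernetes-dashboard", "storage-provisioner",
        }
        for _, c := range components {
            ids, err := containerIDs(c)
            fmt.Printf("%-24s %d container(s) %v err=%v\n", c, len(ids), ids, err)
        }
    }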
	I1007 13:51:47.362166  794355 logs.go:123] Gathering logs for coredns [9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879] ...
	I1007 13:51:47.362197  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879"
	I1007 13:51:47.411340  794355 logs.go:123] Gathering logs for kube-scheduler [cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6] ...
	I1007 13:51:47.411373  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6"
	I1007 13:51:47.458852  794355 logs.go:123] Gathering logs for kindnet [3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3] ...
	I1007 13:51:47.458886  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3"
	I1007 13:51:47.501483  794355 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:51:47.501520  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:51:47.698624  794355 logs.go:123] Gathering logs for kube-apiserver [43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d] ...
	I1007 13:51:47.698656  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d"
	I1007 13:51:47.755024  794355 logs.go:123] Gathering logs for etcd [707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389] ...
	I1007 13:51:47.755058  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389"
	I1007 13:51:47.813053  794355 logs.go:123] Gathering logs for kube-controller-manager [ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40] ...
	I1007 13:51:47.813086  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40"
	I1007 13:51:47.886674  794355 logs.go:123] Gathering logs for kubernetes-dashboard [26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb] ...
	I1007 13:51:47.886708  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb"
	I1007 13:51:47.930277  794355 logs.go:123] Gathering logs for storage-provisioner [f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286] ...
	I1007 13:51:47.930308  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286"
	I1007 13:51:47.969751  794355 logs.go:123] Gathering logs for container status ...
	I1007 13:51:47.969782  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:51:48.017957  794355 logs.go:123] Gathering logs for kube-apiserver [bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93] ...
	I1007 13:51:48.017990  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93"
	I1007 13:51:48.073245  794355 logs.go:123] Gathering logs for etcd [33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b] ...
	I1007 13:51:48.073282  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b"
	I1007 13:51:48.124783  794355 logs.go:123] Gathering logs for coredns [dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f] ...
	I1007 13:51:48.124835  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f"
	I1007 13:51:48.168174  794355 logs.go:123] Gathering logs for kube-proxy [42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e] ...
	I1007 13:51:48.168206  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e"
	I1007 13:51:48.219356  794355 logs.go:123] Gathering logs for kube-proxy [b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623] ...
	I1007 13:51:48.219437  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623"
	I1007 13:51:48.270419  794355 logs.go:123] Gathering logs for kubelet ...
	I1007 13:51:48.270448  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:51:48.354348  794355 logs.go:123] Gathering logs for dmesg ...
	I1007 13:51:48.354387  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:51:48.375675  794355 logs.go:123] Gathering logs for kube-scheduler [c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf] ...
	I1007 13:51:48.375705  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf"
	I1007 13:51:48.427057  794355 logs.go:123] Gathering logs for kube-controller-manager [1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb] ...
	I1007 13:51:48.427093  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb"
	I1007 13:51:48.491075  794355 logs.go:123] Gathering logs for kindnet [8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457] ...
	I1007 13:51:48.491110  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457"
	I1007 13:51:48.537231  794355 logs.go:123] Gathering logs for storage-provisioner [6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817] ...
	I1007 13:51:48.537263  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817"
	I1007 13:51:48.579305  794355 logs.go:123] Gathering logs for containerd ...
	I1007 13:51:48.579335  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
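Note: each "Gathering logs for X" pair above tails the last 400 lines of one source — crictl logs --tail 400 <id> for every container found, journalctl -n 400 for the kubelet and containerd units, a severity-filtered dmesg, and kubectl describe nodes run with the bundled v1.31.1 kubectl. A sketch that assembles the same command set (container IDs are taken as input; SSH execution is elided):

    package main

    import "fmt"

    // gatherCommands reproduces the command set behind one "Gathering logs"
    // pass; ids are the container IDs returned by the crictl queries above.
    func gatherCommands(ids []string) []string {
        cmds := []string{
            `sudo journalctl -u kubelet -n 400`,
            `sudo journalctl -u containerd -n 400`,
            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
            `sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`,
        }
        for _, id := range ids {
            cmds = append(cmds, "sudo /usr/bin/crictl logs --tail 400 "+id)
        }
        return cmds
    }

    func main() {
        for _, c := range gatherCommands([]string{"<container-id>"}) {
            fmt.Println(c)
        }
    }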
	I1007 13:51:48.335702  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:50.829593  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:51.147163  794355 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:51:51.160665  794355 api_server.go:72] duration metric: took 4m19.789284883s to wait for apiserver process to appear ...
	I1007 13:51:51.160693  794355 api_server.go:88] waiting for apiserver healthz status ...
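Note: pgrep confirming a live kube-apiserver process took 4m19.8s measured from the start of the wait loop; minikube now moves from process checks to HTTPS probes of the apiserver's /healthz endpoint. A sketch of such a probe — the address below is a placeholder, and TLS verification is skipped only for brevity (the real check uses the endpoint and CA from the kubeconfig):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // Illustration only; do not skip verification in real checks.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.76.2:8443/healthz") // placeholder endpoint
        if err != nil {
            fmt.Println("healthz unreachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect 200 ok
    }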
	I1007 13:51:51.160732  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:51:51.160793  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:51:51.202836  794355 cri.go:89] found id: "43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d"
	I1007 13:51:51.202861  794355 cri.go:89] found id: "bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93"
	I1007 13:51:51.202866  794355 cri.go:89] found id: ""
	I1007 13:51:51.202874  794355 logs.go:282] 2 containers: [43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93]
	I1007 13:51:51.202937  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.206946  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.211150  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 13:51:51.211229  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:51:51.260451  794355 cri.go:89] found id: "33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b"
	I1007 13:51:51.260473  794355 cri.go:89] found id: "707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389"
	I1007 13:51:51.260478  794355 cri.go:89] found id: ""
	I1007 13:51:51.260486  794355 logs.go:282] 2 containers: [33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b 707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389]
	I1007 13:51:51.260551  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.264367  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.268019  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 13:51:51.268112  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:51:51.313956  794355 cri.go:89] found id: "9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879"
	I1007 13:51:51.313981  794355 cri.go:89] found id: "dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f"
	I1007 13:51:51.313987  794355 cri.go:89] found id: ""
	I1007 13:51:51.313995  794355 logs.go:282] 2 containers: [9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879 dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f]
	I1007 13:51:51.314055  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.320138  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.323892  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:51:51.323966  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:51:51.367224  794355 cri.go:89] found id: "cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6"
	I1007 13:51:51.367248  794355 cri.go:89] found id: "c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf"
	I1007 13:51:51.367253  794355 cri.go:89] found id: ""
	I1007 13:51:51.367261  794355 logs.go:282] 2 containers: [cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6 c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf]
	I1007 13:51:51.367318  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.371479  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.375596  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:51:51.375677  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:51:51.416509  794355 cri.go:89] found id: "42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e"
	I1007 13:51:51.416571  794355 cri.go:89] found id: "b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623"
	I1007 13:51:51.416582  794355 cri.go:89] found id: ""
	I1007 13:51:51.416590  794355 logs.go:282] 2 containers: [42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623]
	I1007 13:51:51.416647  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.420368  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.424068  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:51:51.424147  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:51:51.471458  794355 cri.go:89] found id: "ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40"
	I1007 13:51:51.471482  794355 cri.go:89] found id: "1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb"
	I1007 13:51:51.471487  794355 cri.go:89] found id: ""
	I1007 13:51:51.471495  794355 logs.go:282] 2 containers: [ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40 1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb]
	I1007 13:51:51.471549  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.475473  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.478968  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 13:51:51.479042  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:51:51.523170  794355 cri.go:89] found id: "8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457"
	I1007 13:51:51.523234  794355 cri.go:89] found id: "3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3"
	I1007 13:51:51.523244  794355 cri.go:89] found id: ""
	I1007 13:51:51.523252  794355 logs.go:282] 2 containers: [8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457 3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3]
	I1007 13:51:51.523314  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.527219  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.531459  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 13:51:51.531539  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 13:51:51.569549  794355 cri.go:89] found id: "6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817"
	I1007 13:51:51.569573  794355 cri.go:89] found id: "f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286"
	I1007 13:51:51.569579  794355 cri.go:89] found id: ""
	I1007 13:51:51.569586  794355 logs.go:282] 2 containers: [6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817 f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286]
	I1007 13:51:51.569641  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.573537  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.577329  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:51:51.577403  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:51:51.626409  794355 cri.go:89] found id: "26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb"
	I1007 13:51:51.626434  794355 cri.go:89] found id: ""
	I1007 13:51:51.626443  794355 logs.go:282] 1 containers: [26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb]
	I1007 13:51:51.626565  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:51.630335  794355 logs.go:123] Gathering logs for kube-apiserver [bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93] ...
	I1007 13:51:51.630403  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93"
	I1007 13:51:51.698702  794355 logs.go:123] Gathering logs for etcd [33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b] ...
	I1007 13:51:51.698738  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b"
	I1007 13:51:51.749079  794355 logs.go:123] Gathering logs for storage-provisioner [f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286] ...
	I1007 13:51:51.749111  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286"
	I1007 13:51:51.808959  794355 logs.go:123] Gathering logs for container status ...
	I1007 13:51:51.808990  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:51:51.880281  794355 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:51:51.880311  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:51:52.020771  794355 logs.go:123] Gathering logs for dmesg ...
	I1007 13:51:52.020805  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:51:52.040720  794355 logs.go:123] Gathering logs for etcd [707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389] ...
	I1007 13:51:52.040752  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389"
	I1007 13:51:52.099965  794355 logs.go:123] Gathering logs for coredns [9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879] ...
	I1007 13:51:52.100000  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879"
	I1007 13:51:52.143348  794355 logs.go:123] Gathering logs for coredns [dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f] ...
	I1007 13:51:52.143378  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f"
	I1007 13:51:52.181648  794355 logs.go:123] Gathering logs for kube-scheduler [cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6] ...
	I1007 13:51:52.181713  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6"
	I1007 13:51:52.243906  794355 logs.go:123] Gathering logs for kubelet ...
	I1007 13:51:52.243935  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:51:52.326079  794355 logs.go:123] Gathering logs for kindnet [3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3] ...
	I1007 13:51:52.326115  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3"
	I1007 13:51:52.371249  794355 logs.go:123] Gathering logs for containerd ...
	I1007 13:51:52.371277  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 13:51:52.434154  794355 logs.go:123] Gathering logs for kube-apiserver [43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d] ...
	I1007 13:51:52.434190  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d"
	I1007 13:51:52.490198  794355 logs.go:123] Gathering logs for kube-proxy [42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e] ...
	I1007 13:51:52.490276  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e"
	I1007 13:51:52.532926  794355 logs.go:123] Gathering logs for kube-proxy [b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623] ...
	I1007 13:51:52.533023  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623"
	I1007 13:51:52.574880  794355 logs.go:123] Gathering logs for kube-controller-manager [ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40] ...
	I1007 13:51:52.574958  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40"
	I1007 13:51:52.642127  794355 logs.go:123] Gathering logs for kube-controller-manager [1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb] ...
	I1007 13:51:52.642162  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb"
	I1007 13:51:52.711878  794355 logs.go:123] Gathering logs for kindnet [8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457] ...
	I1007 13:51:52.711912  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457"
	I1007 13:51:52.754733  794355 logs.go:123] Gathering logs for storage-provisioner [6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817] ...
	I1007 13:51:52.754766  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817"
	I1007 13:51:52.798642  794355 logs.go:123] Gathering logs for kubernetes-dashboard [26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb] ...
	I1007 13:51:52.798670  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb"
	I1007 13:51:52.854405  794355 logs.go:123] Gathering logs for kube-scheduler [c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf] ...
	I1007 13:51:52.854437  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf"
	I1007 13:51:52.831395  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:55.330997  788969 pod_ready.go:103] pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace has status "Ready":"False"
	I1007 13:51:56.334279  788969 pod_ready.go:82] duration metric: took 4m0.010877247s for pod "metrics-server-9975d5f86-b7ct2" in "kube-system" namespace to be "Ready" ...
	E1007 13:51:56.334361  788969 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1007 13:51:56.334374  788969 pod_ready.go:39] duration metric: took 5m29.754093121s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
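Note: the reported durations are consistent with the timestamps. For process 788969, 13:51:56.334 minus 5m29.754s puts the start of the extra wait at 13:46:26.580, and the metrics-server check alone (4m0.011s) began at 13:47:56.323; process 794355's figures check out the same way (13:51:46.877 - 4m10.225s = 13:47:36.652).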
	I1007 13:51:56.334389  788969 api_server.go:52] waiting for apiserver process to appear ...
	I1007 13:51:56.334457  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:51:56.334579  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:51:56.412777  788969 cri.go:89] found id: "6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7"
	I1007 13:51:56.412797  788969 cri.go:89] found id: "dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960"
	I1007 13:51:56.412802  788969 cri.go:89] found id: ""
	I1007 13:51:56.412810  788969 logs.go:282] 2 containers: [6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7 dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960]
	I1007 13:51:56.412866  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.422463  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.429934  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 13:51:56.430014  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:51:56.500290  788969 cri.go:89] found id: "538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1"
	I1007 13:51:56.500315  788969 cri.go:89] found id: "3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c"
	I1007 13:51:56.500320  788969 cri.go:89] found id: ""
	I1007 13:51:56.500327  788969 logs.go:282] 2 containers: [538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1 3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c]
	I1007 13:51:56.500384  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.505493  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.511697  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 13:51:56.511775  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:51:56.568360  788969 cri.go:89] found id: "b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b"
	I1007 13:51:56.568387  788969 cri.go:89] found id: "fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed"
	I1007 13:51:56.568392  788969 cri.go:89] found id: ""
	I1007 13:51:56.568399  788969 logs.go:282] 2 containers: [b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed]
	I1007 13:51:56.568468  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.572933  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.577480  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:51:56.577558  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:51:56.637806  788969 cri.go:89] found id: "1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297"
	I1007 13:51:56.637834  788969 cri.go:89] found id: "3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f"
	I1007 13:51:56.637838  788969 cri.go:89] found id: ""
	I1007 13:51:56.637846  788969 logs.go:282] 2 containers: [1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297 3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f]
	I1007 13:51:56.637918  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.643411  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.647884  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:51:56.647957  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:51:56.750655  788969 cri.go:89] found id: "b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba"
	I1007 13:51:56.750682  788969 cri.go:89] found id: "90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d"
	I1007 13:51:56.750687  788969 cri.go:89] found id: ""
	I1007 13:51:56.750694  788969 logs.go:282] 2 containers: [b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba 90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d]
	I1007 13:51:56.750755  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.761341  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.767207  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:51:56.767287  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:51:56.854493  788969 cri.go:89] found id: "a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06"
	I1007 13:51:56.854525  788969 cri.go:89] found id: "6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771"
	I1007 13:51:56.854531  788969 cri.go:89] found id: ""
	I1007 13:51:56.854538  788969 logs.go:282] 2 containers: [a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06 6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771]
	I1007 13:51:56.854606  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.859331  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.863262  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 13:51:56.863350  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:51:56.940414  788969 cri.go:89] found id: "c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3"
	I1007 13:51:56.940437  788969 cri.go:89] found id: "54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f"
	I1007 13:51:56.940442  788969 cri.go:89] found id: ""
	I1007 13:51:56.940450  788969 logs.go:282] 2 containers: [c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3 54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f]
	I1007 13:51:56.940513  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.946919  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:56.951803  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:51:56.951885  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:51:57.012766  788969 cri.go:89] found id: "02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608"
	I1007 13:51:57.012792  788969 cri.go:89] found id: ""
	I1007 13:51:57.012800  788969 logs.go:282] 1 containers: [02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608]
	I1007 13:51:57.012863  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:57.017491  788969 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 13:51:57.017580  788969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 13:51:57.075862  788969 cri.go:89] found id: "94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c"
	I1007 13:51:57.075894  788969 cri.go:89] found id: "3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656"
	I1007 13:51:57.075899  788969 cri.go:89] found id: ""
	I1007 13:51:57.075906  788969 logs.go:282] 2 containers: [94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c 3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656]
	I1007 13:51:57.075967  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:57.081400  788969 ssh_runner.go:195] Run: which crictl
	I1007 13:51:57.085944  788969 logs.go:123] Gathering logs for containerd ...
	I1007 13:51:57.085968  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 13:51:57.162686  788969 logs.go:123] Gathering logs for dmesg ...
	I1007 13:51:57.162733  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:51:57.179955  788969 logs.go:123] Gathering logs for kube-controller-manager [6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771] ...
	I1007 13:51:57.179991  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771"
	I1007 13:51:57.306063  788969 logs.go:123] Gathering logs for etcd [3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c] ...
	I1007 13:51:57.306099  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c"
	I1007 13:51:57.357347  788969 logs.go:123] Gathering logs for coredns [fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed] ...
	I1007 13:51:57.357378  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed"
	I1007 13:51:57.419970  788969 logs.go:123] Gathering logs for kube-controller-manager [a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06] ...
	I1007 13:51:57.420003  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06"
	I1007 13:51:57.497427  788969 logs.go:123] Gathering logs for kubelet ...
	I1007 13:51:57.497465  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1007 13:51:57.558680  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:26 old-k8s-version-716021 kubelet[666]: E1007 13:46:26.808783     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-716021" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-716021' and this object
	W1007 13:51:57.559023  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:26 old-k8s-version-716021 kubelet[666]: E1007 13:46:26.854809     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-85z2f": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-85z2f" is forbidden: User "system:node:old-k8s-version-716021" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-716021' and this object
	W1007 13:51:57.562387  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:26 old-k8s-version-716021 kubelet[666]: E1007 13:46:26.856421     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-716021" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-716021' and this object
	W1007 13:51:57.562607  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:26 old-k8s-version-716021 kubelet[666]: E1007 13:46:26.856819     666 reflector.go:138] object-"kube-system"/"coredns-token-kp44b": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-kp44b" is forbidden: User "system:node:old-k8s-version-716021" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-716021' and this object
	W1007 13:51:57.567093  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:28 old-k8s-version-716021 kubelet[666]: E1007 13:46:28.915071     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.567283  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:29 old-k8s-version-716021 kubelet[666]: E1007 13:46:29.091904     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.570040  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:43 old-k8s-version-716021 kubelet[666]: E1007 13:46:43.213529     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.571971  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:51 old-k8s-version-716021 kubelet[666]: E1007 13:46:51.378418     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.572429  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:52 old-k8s-version-716021 kubelet[666]: E1007 13:46:52.384192     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.572755  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:53 old-k8s-version-716021 kubelet[666]: E1007 13:46:53.386690     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.573266  788969 logs.go:138] Found kubelet problem: Oct 07 13:46:57 old-k8s-version-716021 kubelet[666]: E1007 13:46:57.207180     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.573720  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:01 old-k8s-version-716021 kubelet[666]: E1007 13:47:01.413751     666 pod_workers.go:191] Error syncing pod 4cfcc06f-69c5-42d6-bb20-67a3d942cfb0 ("storage-provisioner_kube-system(4cfcc06f-69c5-42d6-bb20-67a3d942cfb0)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(4cfcc06f-69c5-42d6-bb20-67a3d942cfb0)"
	W1007 13:51:57.574645  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:05 old-k8s-version-716021 kubelet[666]: E1007 13:47:05.430539     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.577065  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:09 old-k8s-version-716021 kubelet[666]: E1007 13:47:09.218614     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.577392  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:12 old-k8s-version-716021 kubelet[666]: E1007 13:47:12.186154     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.577711  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:23 old-k8s-version-716021 kubelet[666]: E1007 13:47:23.206886     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.578040  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:24 old-k8s-version-716021 kubelet[666]: E1007 13:47:24.209098     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.578224  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:34 old-k8s-version-716021 kubelet[666]: E1007 13:47:34.211370     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.578812  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:40 old-k8s-version-716021 kubelet[666]: E1007 13:47:40.564455     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.579137  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:42 old-k8s-version-716021 kubelet[666]: E1007 13:47:42.186663     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.579319  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:47 old-k8s-version-716021 kubelet[666]: E1007 13:47:47.209688     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.579644  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:54 old-k8s-version-716021 kubelet[666]: E1007 13:47:54.207353     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.582062  788969 logs.go:138] Found kubelet problem: Oct 07 13:47:59 old-k8s-version-716021 kubelet[666]: E1007 13:47:59.222600     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.582424  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:07 old-k8s-version-716021 kubelet[666]: E1007 13:48:07.206479     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.582648  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:12 old-k8s-version-716021 kubelet[666]: E1007 13:48:12.222632     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.583311  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:20 old-k8s-version-716021 kubelet[666]: E1007 13:48:20.682098     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.583698  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:22 old-k8s-version-716021 kubelet[666]: E1007 13:48:22.186594     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.583912  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:23 old-k8s-version-716021 kubelet[666]: E1007 13:48:23.206902     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.584114  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:35 old-k8s-version-716021 kubelet[666]: E1007 13:48:35.206832     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.584442  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:36 old-k8s-version-716021 kubelet[666]: E1007 13:48:36.206609     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.584626  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:46 old-k8s-version-716021 kubelet[666]: E1007 13:48:46.206819     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.584987  788969 logs.go:138] Found kubelet problem: Oct 07 13:48:49 old-k8s-version-716021 kubelet[666]: E1007 13:48:49.206451     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.585172  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:01 old-k8s-version-716021 kubelet[666]: E1007 13:49:01.206862     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.585505  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:04 old-k8s-version-716021 kubelet[666]: E1007 13:49:04.210397     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.585696  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:16 old-k8s-version-716021 kubelet[666]: E1007 13:49:16.206769     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.586043  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:19 old-k8s-version-716021 kubelet[666]: E1007 13:49:19.206428     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.588462  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:30 old-k8s-version-716021 kubelet[666]: E1007 13:49:30.231019     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1007 13:51:57.588788  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:31 old-k8s-version-716021 kubelet[666]: E1007 13:49:31.206480     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.588970  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:45 old-k8s-version-716021 kubelet[666]: E1007 13:49:45.207213     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.589558  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:46 old-k8s-version-716021 kubelet[666]: E1007 13:49:46.907884     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.589896  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:52 old-k8s-version-716021 kubelet[666]: E1007 13:49:52.187043     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.590078  788969 logs.go:138] Found kubelet problem: Oct 07 13:49:57 old-k8s-version-716021 kubelet[666]: E1007 13:49:57.214713     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.590404  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:05 old-k8s-version-716021 kubelet[666]: E1007 13:50:05.206555     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.590593  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:08 old-k8s-version-716021 kubelet[666]: E1007 13:50:08.211199     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.590918  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:17 old-k8s-version-716021 kubelet[666]: E1007 13:50:17.206916     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.591102  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:19 old-k8s-version-716021 kubelet[666]: E1007 13:50:19.206748     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.591430  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:28 old-k8s-version-716021 kubelet[666]: E1007 13:50:28.207831     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.591612  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:30 old-k8s-version-716021 kubelet[666]: E1007 13:50:30.211659     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.591939  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:40 old-k8s-version-716021 kubelet[666]: E1007 13:50:40.210379     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.592120  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:45 old-k8s-version-716021 kubelet[666]: E1007 13:50:45.207104     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.592445  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:53 old-k8s-version-716021 kubelet[666]: E1007 13:50:53.206980     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.592626  788969 logs.go:138] Found kubelet problem: Oct 07 13:50:57 old-k8s-version-716021 kubelet[666]: E1007 13:50:57.206804     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.592951  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:08 old-k8s-version-716021 kubelet[666]: E1007 13:51:08.206532     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.593132  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:08 old-k8s-version-716021 kubelet[666]: E1007 13:51:08.207812     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.593313  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: E1007 13:51:22.206879     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.593637  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: E1007 13:51:22.208036     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.593845  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:33 old-k8s-version-716021 kubelet[666]: E1007 13:51:33.206925     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.594174  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:34 old-k8s-version-716021 kubelet[666]: E1007 13:51:34.206498     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:57.594357  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:47 old-k8s-version-716021 kubelet[666]: E1007 13:51:47.231399     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:57.594690  788969 logs.go:138] Found kubelet problem: Oct 07 13:51:48 old-k8s-version-716021 kubelet[666]: E1007 13:51:48.206953     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	I1007 13:51:57.594701  788969 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:51:57.594714  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
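	(The repeated "Found kubelet problem" warnings above are scraped from the kubelet journal on the node. A minimal manual equivalent, assuming only the profile name and journal unit that appear in this log; the grep filter is illustrative and not part of minikube itself:)
	# Sketch: re-scan the kubelet journal for the same back-off errors
	minikube -p old-k8s-version-716021 ssh -- \
	  "sudo journalctl -u kubelet -n 400 | grep -E 'ImagePullBackOff|CrashLoopBackOff'"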
	I1007 13:51:55.412158  794355 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1007 13:51:55.419819  794355 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1007 13:51:55.420873  794355 api_server.go:141] control plane version: v1.31.1
	I1007 13:51:55.420898  794355 api_server.go:131] duration metric: took 4.26019759s to wait for apiserver health ...
	I1007 13:51:55.420908  794355 system_pods.go:43] waiting for kube-system pods to appear ...
	I1007 13:51:55.420931  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1007 13:51:55.420997  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1007 13:51:55.474665  794355 cri.go:89] found id: "43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d"
	I1007 13:51:55.474690  794355 cri.go:89] found id: "bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93"
	I1007 13:51:55.474696  794355 cri.go:89] found id: ""
	I1007 13:51:55.474704  794355 logs.go:282] 2 containers: [43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93]
	I1007 13:51:55.474761  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.478618  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.482016  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1007 13:51:55.482090  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1007 13:51:55.521042  794355 cri.go:89] found id: "33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b"
	I1007 13:51:55.521110  794355 cri.go:89] found id: "707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389"
	I1007 13:51:55.521130  794355 cri.go:89] found id: ""
	I1007 13:51:55.521154  794355 logs.go:282] 2 containers: [33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b 707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389]
	I1007 13:51:55.521245  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.525026  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.529230  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1007 13:51:55.529304  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1007 13:51:55.571323  794355 cri.go:89] found id: "9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879"
	I1007 13:51:55.571348  794355 cri.go:89] found id: "dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f"
	I1007 13:51:55.571353  794355 cri.go:89] found id: ""
	I1007 13:51:55.571361  794355 logs.go:282] 2 containers: [9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879 dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f]
	I1007 13:51:55.571421  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.575650  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.579289  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1007 13:51:55.579363  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1007 13:51:55.628166  794355 cri.go:89] found id: "cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6"
	I1007 13:51:55.628193  794355 cri.go:89] found id: "c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf"
	I1007 13:51:55.628199  794355 cri.go:89] found id: ""
	I1007 13:51:55.628206  794355 logs.go:282] 2 containers: [cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6 c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf]
	I1007 13:51:55.628262  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.632201  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.636006  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1007 13:51:55.636086  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1007 13:51:55.682927  794355 cri.go:89] found id: "42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e"
	I1007 13:51:55.682949  794355 cri.go:89] found id: "b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623"
	I1007 13:51:55.682953  794355 cri.go:89] found id: ""
	I1007 13:51:55.682960  794355 logs.go:282] 2 containers: [42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623]
	I1007 13:51:55.683017  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.686777  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.690282  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1007 13:51:55.690379  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1007 13:51:55.729466  794355 cri.go:89] found id: "ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40"
	I1007 13:51:55.729493  794355 cri.go:89] found id: "1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb"
	I1007 13:51:55.729506  794355 cri.go:89] found id: ""
	I1007 13:51:55.729513  794355 logs.go:282] 2 containers: [ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40 1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb]
	I1007 13:51:55.729596  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.734118  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.738775  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1007 13:51:55.738883  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1007 13:51:55.778460  794355 cri.go:89] found id: "8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457"
	I1007 13:51:55.778484  794355 cri.go:89] found id: "3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3"
	I1007 13:51:55.778490  794355 cri.go:89] found id: ""
	I1007 13:51:55.778498  794355 logs.go:282] 2 containers: [8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457 3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3]
	I1007 13:51:55.778603  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.782515  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.786145  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1007 13:51:55.786221  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1007 13:51:55.836777  794355 cri.go:89] found id: "6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817"
	I1007 13:51:55.836813  794355 cri.go:89] found id: "f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286"
	I1007 13:51:55.836819  794355 cri.go:89] found id: ""
	I1007 13:51:55.836853  794355 logs.go:282] 2 containers: [6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817 f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286]
	I1007 13:51:55.836927  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.841192  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.845144  794355 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1007 13:51:55.845219  794355 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1007 13:51:55.885486  794355 cri.go:89] found id: "26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb"
	I1007 13:51:55.885556  794355 cri.go:89] found id: ""
	I1007 13:51:55.885579  794355 logs.go:282] 1 containers: [26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb]
	I1007 13:51:55.885717  794355 ssh_runner.go:195] Run: which crictl
	I1007 13:51:55.889559  794355 logs.go:123] Gathering logs for kubernetes-dashboard [26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb] ...
	I1007 13:51:55.889634  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 26f59d548b181b4d23dbd64124417687da7c0d30cffc04a0fddcbe9c4ad969cb"
	I1007 13:51:55.936361  794355 logs.go:123] Gathering logs for containerd ...
	I1007 13:51:55.936392  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1007 13:51:56.000048  794355 logs.go:123] Gathering logs for container status ...
	I1007 13:51:56.000087  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:51:56.074551  794355 logs.go:123] Gathering logs for kube-apiserver [43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d] ...
	I1007 13:51:56.074589  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43bdf561a58ff5fabe544dd85759243717c93e1c15c722fe3c0a318ca95f5b0d"
	I1007 13:51:56.130585  794355 logs.go:123] Gathering logs for etcd [33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b] ...
	I1007 13:51:56.130621  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 33903c7a377f2b08b0fecf23a0c14df9bb3a9c9ec0f64bfa2c22ba01a2ff9d9b"
	I1007 13:51:56.195750  794355 logs.go:123] Gathering logs for kube-proxy [b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623] ...
	I1007 13:51:56.195786  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b935711aea707ce61d36b2d0561c37446a26ba3ecdd12f08138b34facf4ba623"
	I1007 13:51:56.254965  794355 logs.go:123] Gathering logs for kindnet [8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457] ...
	I1007 13:51:56.254992  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8eaabe7d610941a8182facd3522da4f8fe7135414350c9246a55dd84e51dc457"
	I1007 13:51:56.297322  794355 logs.go:123] Gathering logs for kindnet [3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3] ...
	I1007 13:51:56.297355  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3194d5b907e6b8bf8971fb018f25759d55c877260e76b0a7fbda0a3e112940e3"
	I1007 13:51:56.365756  794355 logs.go:123] Gathering logs for kube-controller-manager [ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40] ...
	I1007 13:51:56.365839  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ab0b956db3468c6fd8893d641eb706709d5cd0eedcbe8aaee7fe806586346a40"
	I1007 13:51:56.455989  794355 logs.go:123] Gathering logs for storage-provisioner [f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286] ...
	I1007 13:51:56.456164  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f1587851443378e689dc16812fa6726ea857f90f4d96b15adc0b5e91ddbf6286"
	I1007 13:51:56.509343  794355 logs.go:123] Gathering logs for kubelet ...
	I1007 13:51:56.509370  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1007 13:51:56.596907  794355 logs.go:123] Gathering logs for dmesg ...
	I1007 13:51:56.596991  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1007 13:51:56.615518  794355 logs.go:123] Gathering logs for coredns [dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f] ...
	I1007 13:51:56.615546  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbf736aab37e3211987b3ec8c6931a437a26edc6d5c4bbe04c1d38cd34c8f03f"
	I1007 13:51:56.681777  794355 logs.go:123] Gathering logs for kube-scheduler [c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf] ...
	I1007 13:51:56.681806  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c574ff4369a62adef6d1dfe3942339a4d53449042cdeca6c760f18198a9cbfbf"
	I1007 13:51:56.787475  794355 logs.go:123] Gathering logs for kube-proxy [42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e] ...
	I1007 13:51:56.787629  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 42bfb9b45ce705d6a68018440dc9e8cb4f11c7bce3abd38a8ea5acab2ef57a6e"
	I1007 13:51:56.878780  794355 logs.go:123] Gathering logs for kube-controller-manager [1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb] ...
	I1007 13:51:56.878885  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c834176710ac261d54557344fafb1339bee02e3f69697a088561211726a9feb"
	I1007 13:51:56.980343  794355 logs.go:123] Gathering logs for storage-provisioner [6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817] ...
	I1007 13:51:56.980461  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6dd1df3ff1615192d2ec04bf57a163244d17bfb455a44288e257159641f69817"
	I1007 13:51:57.036489  794355 logs.go:123] Gathering logs for describe nodes ...
	I1007 13:51:57.036517  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1007 13:51:57.207379  794355 logs.go:123] Gathering logs for kube-apiserver [bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93] ...
	I1007 13:51:57.207414  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bc043677512aaec562207d47602450f079bea404a98d863a7a9aebd601821d93"
	I1007 13:51:57.281825  794355 logs.go:123] Gathering logs for etcd [707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389] ...
	I1007 13:51:57.281859  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 707a20dc9d7501895dfe567a7a779ca40e8fd166c56d5fede07fc9ca7ff99389"
	I1007 13:51:57.382058  794355 logs.go:123] Gathering logs for coredns [9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879] ...
	I1007 13:51:57.382104  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9a5e89ef652822e1f49fb89e396dfb759057f81942cd30388e30068d53d89879"
	I1007 13:51:57.464662  794355 logs.go:123] Gathering logs for kube-scheduler [cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6] ...
	I1007 13:51:57.464696  794355 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cd590f5f9108ec9765f17072f326f44cd3c8ab302521c32f26c090cdd981b6f6"
	I1007 13:52:00.045811  794355 system_pods.go:59] 9 kube-system pods found
	I1007 13:52:00.045877  794355 system_pods.go:61] "coredns-7c65d6cfc9-zc8p4" [e4bacbd6-70bf-47ea-b1e3-6618add65b1e] Running
	I1007 13:52:00.045886  794355 system_pods.go:61] "etcd-no-preload-178678" [f00c456c-ffcf-45dc-8454-c47b26699dc9] Running
	I1007 13:52:00.045891  794355 system_pods.go:61] "kindnet-ch2lw" [f1611838-bc60-417c-b7bb-30af2b5ba670] Running
	I1007 13:52:00.045897  794355 system_pods.go:61] "kube-apiserver-no-preload-178678" [3ed070dd-a821-417b-84e6-8b932a9e052c] Running
	I1007 13:52:00.045902  794355 system_pods.go:61] "kube-controller-manager-no-preload-178678" [e408fc97-6fd6-4dd3-a303-d926cb3c853a] Running
	I1007 13:52:00.045907  794355 system_pods.go:61] "kube-proxy-46xb9" [f66cede8-762c-4704-a591-cff23ceab2d8] Running
	I1007 13:52:00.045911  794355 system_pods.go:61] "kube-scheduler-no-preload-178678" [94923977-0f6a-40d4-a2ef-fc673171c650] Running
	I1007 13:52:00.045919  794355 system_pods.go:61] "metrics-server-6867b74b74-86hgb" [eb62f0a4-9997-4d81-9c84-bd15b1cf8490] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:52:00.045926  794355 system_pods.go:61] "storage-provisioner" [7b866838-0573-40ec-8ff2-1ce324bde301] Running
	I1007 13:52:00.045934  794355 system_pods.go:74] duration metric: took 4.625019474s to wait for pod list to return data ...
	I1007 13:52:00.045951  794355 default_sa.go:34] waiting for default service account to be created ...
	I1007 13:52:00.071456  794355 default_sa.go:45] found service account: "default"
	I1007 13:52:00.071484  794355 default_sa.go:55] duration metric: took 25.52475ms for default service account to be created ...
	I1007 13:52:00.071495  794355 system_pods.go:116] waiting for k8s-apps to be running ...
	I1007 13:52:00.094822  794355 system_pods.go:86] 9 kube-system pods found
	I1007 13:52:00.094931  794355 system_pods.go:89] "coredns-7c65d6cfc9-zc8p4" [e4bacbd6-70bf-47ea-b1e3-6618add65b1e] Running
	I1007 13:52:00.094958  794355 system_pods.go:89] "etcd-no-preload-178678" [f00c456c-ffcf-45dc-8454-c47b26699dc9] Running
	I1007 13:52:00.095008  794355 system_pods.go:89] "kindnet-ch2lw" [f1611838-bc60-417c-b7bb-30af2b5ba670] Running
	I1007 13:52:00.095038  794355 system_pods.go:89] "kube-apiserver-no-preload-178678" [3ed070dd-a821-417b-84e6-8b932a9e052c] Running
	I1007 13:52:00.095071  794355 system_pods.go:89] "kube-controller-manager-no-preload-178678" [e408fc97-6fd6-4dd3-a303-d926cb3c853a] Running
	I1007 13:52:00.095123  794355 system_pods.go:89] "kube-proxy-46xb9" [f66cede8-762c-4704-a591-cff23ceab2d8] Running
	I1007 13:52:00.095157  794355 system_pods.go:89] "kube-scheduler-no-preload-178678" [94923977-0f6a-40d4-a2ef-fc673171c650] Running
	I1007 13:52:00.095187  794355 system_pods.go:89] "metrics-server-6867b74b74-86hgb" [eb62f0a4-9997-4d81-9c84-bd15b1cf8490] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1007 13:52:00.095211  794355 system_pods.go:89] "storage-provisioner" [7b866838-0573-40ec-8ff2-1ce324bde301] Running
	I1007 13:52:00.095252  794355 system_pods.go:126] duration metric: took 23.749368ms to wait for k8s-apps to be running ...
	I1007 13:52:00.095278  794355 system_svc.go:44] waiting for kubelet service to be running ....
	I1007 13:52:00.095381  794355 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:52:00.178431  794355 system_svc.go:56] duration metric: took 83.141665ms WaitForService to wait for kubelet
	I1007 13:52:00.178464  794355 kubeadm.go:582] duration metric: took 4m28.807087418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1007 13:52:00.178488  794355 node_conditions.go:102] verifying NodePressure condition ...
	I1007 13:52:00.188980  794355 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1007 13:52:00.189014  794355 node_conditions.go:123] node cpu capacity is 2
	I1007 13:52:00.189027  794355 node_conditions.go:105] duration metric: took 10.534024ms to run NodePressure ...
	I1007 13:52:00.189042  794355 start.go:241] waiting for startup goroutines ...
	I1007 13:52:00.189049  794355 start.go:246] waiting for cluster config update ...
	I1007 13:52:00.189060  794355 start.go:255] writing updated cluster config ...
	I1007 13:52:00.189412  794355 ssh_runner.go:195] Run: rm -f paused
	I1007 13:52:00.407758  794355 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1007 13:52:00.411379  794355 out.go:177] * Done! kubectl is now configured to use "no-preload-178678" cluster and "default" namespace by default
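	(At this point the parallel "no-preload-178678" run has completed successfully. A quick sanity check against that cluster, assuming standard kubectl and the context name printed in the "Done!" line above:)
	# Hypothetical verification that the configured context is reachable
	kubectl --context no-preload-178678 get pods -A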
	I1007 13:51:57.744792  788969 logs.go:123] Gathering logs for etcd [538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1] ...
	I1007 13:51:57.744823  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1"
	I1007 13:51:57.789489  788969 logs.go:123] Gathering logs for coredns [b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b] ...
	I1007 13:51:57.789522  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b"
	I1007 13:51:57.841342  788969 logs.go:123] Gathering logs for kube-scheduler [3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f] ...
	I1007 13:51:57.841376  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f"
	I1007 13:51:57.885042  788969 logs.go:123] Gathering logs for kube-proxy [b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba] ...
	I1007 13:51:57.885079  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba"
	I1007 13:51:57.928690  788969 logs.go:123] Gathering logs for kube-proxy [90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d] ...
	I1007 13:51:57.928720  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d"
	I1007 13:51:57.966663  788969 logs.go:123] Gathering logs for kindnet [c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3] ...
	I1007 13:51:57.966697  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3"
	I1007 13:51:58.025474  788969 logs.go:123] Gathering logs for kube-apiserver [6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7] ...
	I1007 13:51:58.025511  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7"
	I1007 13:51:58.099784  788969 logs.go:123] Gathering logs for kube-apiserver [dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960] ...
	I1007 13:51:58.099828  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960"
	I1007 13:51:58.156884  788969 logs.go:123] Gathering logs for container status ...
	I1007 13:51:58.156916  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1007 13:51:58.245817  788969 logs.go:123] Gathering logs for kubernetes-dashboard [02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608] ...
	I1007 13:51:58.245848  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608"
	I1007 13:51:58.293375  788969 logs.go:123] Gathering logs for storage-provisioner [94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c] ...
	I1007 13:51:58.293411  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c"
	I1007 13:51:58.333307  788969 logs.go:123] Gathering logs for storage-provisioner [3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656] ...
	I1007 13:51:58.333338  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656"
	I1007 13:51:58.374185  788969 logs.go:123] Gathering logs for kube-scheduler [1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297] ...
	I1007 13:51:58.374218  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297"
	I1007 13:51:58.414799  788969 logs.go:123] Gathering logs for kindnet [54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f] ...
	I1007 13:51:58.414829  788969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f"
	I1007 13:51:58.473608  788969 out.go:358] Setting ErrFile to fd 2...
	I1007 13:51:58.473634  788969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1007 13:51:58.473748  788969 out.go:270] X Problems detected in kubelet:
	W1007 13:51:58.473763  788969 out.go:270]   Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: E1007 13:51:22.208036     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:58.473777  788969 out.go:270]   Oct 07 13:51:33 old-k8s-version-716021 kubelet[666]: E1007 13:51:33.206925     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:58.473786  788969 out.go:270]   Oct 07 13:51:34 old-k8s-version-716021 kubelet[666]: E1007 13:51:34.206498     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	W1007 13:51:58.473798  788969 out.go:270]   Oct 07 13:51:47 old-k8s-version-716021 kubelet[666]: E1007 13:51:47.231399     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1007 13:51:58.473804  788969 out.go:270]   Oct 07 13:51:48 old-k8s-version-716021 kubelet[666]: E1007 13:51:48.206953     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	I1007 13:51:58.473811  788969 out.go:358] Setting ErrFile to fd 2...
	I1007 13:51:58.473817  788969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:52:08.474751  788969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:52:08.488321  788969 api_server.go:72] duration metric: took 6m2.173367121s to wait for apiserver process to appear ...
	I1007 13:52:08.488351  788969 api_server.go:88] waiting for apiserver healthz status ...
	I1007 13:52:08.490582  788969 out.go:201] 
	W1007 13:52:08.492328  788969 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W1007 13:52:08.492353  788969 out.go:270] * 
	W1007 13:52:08.493335  788969 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1007 13:52:08.495726  788969 out.go:201] 
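	(The failing run exits with GUEST_START because the API server of "old-k8s-version-716021" never reported healthy within the 6m0s wait; the probe minikube performs is an HTTPS GET of /healthz, as seen earlier for the other cluster. A minimal manual re-check, assuming that context exists in the local kubeconfig:)
	# Sketch of the same healthz probe via kubectl (a healthy server returns "ok")
	kubectl --context old-k8s-version-716021 get --raw /healthz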
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	b1a44caebf01b       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   05af15c10037d       dashboard-metrics-scraper-8d5bb5db8-wxkkm
	94f42c6c49fd8       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   e63807b671ca8       storage-provisioner
	02588bae23d09       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   7fcf8804d2b90       kubernetes-dashboard-cd95d586-pm244
	b47f6084fbcd0       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   2d8edbeddda57       kube-proxy-hdch9
	b970f66a7b788       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   8eb524b0aa304       coredns-74ff55c5b-fz8dj
	3e11e0da9f75a       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   e63807b671ca8       storage-provisioner
	7ded2000d7812       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   31c68e5fa68c2       busybox
	c44e873fb6306       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   534b5e3352955       kindnet-z4nmp
	1e75bdb1fcc16       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   cc42cd42180b9       kube-scheduler-old-k8s-version-716021
	6438e2a98b44e       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   5cbb50c59ca83       kube-apiserver-old-k8s-version-716021
	538e7c613d2fd       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   4c48d9c534013       etcd-old-k8s-version-716021
	a6e224cfa7000       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   bad38031296b9       kube-controller-manager-old-k8s-version-716021
	24887212160b5       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   db5520e44465a       busybox
	fefa6581f4e4c       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   da2ebe1ddf664       coredns-74ff55c5b-fz8dj
	54c2e6b5d938c       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   8639199b50925       kindnet-z4nmp
	90271f39ba89e       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   fd9abf7611289       kube-proxy-hdch9
	3a2ed10814365       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   e7cbd8b595e3d       kube-scheduler-old-k8s-version-716021
	6ee84abf1b446       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   15c46d0e64b96       kube-controller-manager-old-k8s-version-716021
	dffbe9e7eda4a       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   3328ae097f0e2       kube-apiserver-old-k8s-version-716021
	3f8d8d7911069       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   b44d36abf943f       etcd-old-k8s-version-716021
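	(The container status table above is crictl output captured on the node. To regenerate it by hand, assuming the default containerd CRI socket inside the minikube node:)
	# Sketch: list all containers, running and exited, the same way
	minikube -p old-k8s-version-716021 ssh -- sudo crictl ps -a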
	
	
	==> containerd <==
	Oct 07 13:47:59 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:47:59.220520631Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 07 13:47:59 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:47:59.222002872Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 07 13:47:59 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:47:59.222042265Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.214039485Z" level=info msg="CreateContainer within sandbox \"05af15c10037d7ea18db6f86a27c932584f850c006325c6996c541a38c7745fb\" for container name:\"dashboard-metrics-scraper\"  attempt:4"
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.243296904Z" level=info msg="CreateContainer within sandbox \"05af15c10037d7ea18db6f86a27c932584f850c006325c6996c541a38c7745fb\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"df8025f070c9a15108aeb64733a1ae4c6c18f523dce5d735af58a52d15bb96fe\""
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.244004159Z" level=info msg="StartContainer for \"df8025f070c9a15108aeb64733a1ae4c6c18f523dce5d735af58a52d15bb96fe\""
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.321290752Z" level=info msg="StartContainer for \"df8025f070c9a15108aeb64733a1ae4c6c18f523dce5d735af58a52d15bb96fe\" returns successfully"
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.355269165Z" level=info msg="shim disconnected" id=df8025f070c9a15108aeb64733a1ae4c6c18f523dce5d735af58a52d15bb96fe namespace=k8s.io
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.355339482Z" level=warning msg="cleaning up after shim disconnected" id=df8025f070c9a15108aeb64733a1ae4c6c18f523dce5d735af58a52d15bb96fe namespace=k8s.io
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.355352659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.683968893Z" level=info msg="RemoveContainer for \"5cced56ee0a9bbca8b852f15a235d02404a8bcad5e386e9cfae43e5e46636036\""
	Oct 07 13:48:20 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:48:20.690019097Z" level=info msg="RemoveContainer for \"5cced56ee0a9bbca8b852f15a235d02404a8bcad5e386e9cfae43e5e46636036\" returns successfully"
	Oct 07 13:49:30 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:30.208895536Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:49:30 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:30.216597710Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 07 13:49:30 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:30.228432957Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 07 13:49:30 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:30.228486675Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.209305746Z" level=info msg="CreateContainer within sandbox \"05af15c10037d7ea18db6f86a27c932584f850c006325c6996c541a38c7745fb\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.229880173Z" level=info msg="CreateContainer within sandbox \"05af15c10037d7ea18db6f86a27c932584f850c006325c6996c541a38c7745fb\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228\""
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.230562517Z" level=info msg="StartContainer for \"b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228\""
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.296978460Z" level=info msg="StartContainer for \"b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228\" returns successfully"
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.324308442Z" level=info msg="shim disconnected" id=b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228 namespace=k8s.io
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.324369905Z" level=warning msg="cleaning up after shim disconnected" id=b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228 namespace=k8s.io
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.324379464Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.910028059Z" level=info msg="RemoveContainer for \"df8025f070c9a15108aeb64733a1ae4c6c18f523dce5d735af58a52d15bb96fe\""
	Oct 07 13:49:46 old-k8s-version-716021 containerd[570]: time="2024-10-07T13:49:46.917534454Z" level=info msg="RemoveContainer for \"df8025f070c9a15108aeb64733a1ae4c6c18f523dce5d735af58a52d15bb96fe\" returns successfully"
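	Two failure signatures repeat through this containerd log: the kubelet keeps requesting fake.domain/registry.k8s.io/echoserver:1.4, which can never resolve (judging by the failing v1beta1.metrics.k8s.io APIService in the kube-apiserver log below, this appears to be the image the metrics-server deployment points at), and dashboard-metrics-scraper is created, started, and exits within milliseconds on every attempt. A minimal sketch for isolating both patterns straight from the node's containerd journal, assuming the same profile name and minikube binary:
	
	out/minikube-linux-arm64 -p old-k8s-version-716021 ssh "sudo journalctl -u containerd --no-pager | grep -E 'fake.domain|dashboard-metrics-scraper' | tail -n 40"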
	
	
	==> coredns [b970f66a7b788465fb1b5efff7470a2a13241205a4b3871615987cd5e8185c0b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:58658 - 41045 "HINFO IN 8434989929753176642.3607726803472426746. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.042331802s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1007 13:47:00.529333       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-07 13:46:30.528767829 +0000 UTC m=+0.037881935) (total time: 30.00044496s):
	Trace[2019727887]: [30.00044496s] [30.00044496s] END
	E1007 13:47:00.529382       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1007 13:47:00.531089       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-07 13:46:30.529563582 +0000 UTC m=+0.038677672) (total time: 30.00148428s):
	Trace[939984059]: [30.00148428s] [30.00148428s] END
	E1007 13:47:00.531126       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1007 13:47:00.533131       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-07 13:46:30.530980217 +0000 UTC m=+0.040094323) (total time: 30.002114817s):
	Trace[1474941318]: [30.002114817s] [30.002114817s] END
	E1007 13:47:00.533198       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
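	All three ListAndWatch failures above hit the kubernetes Service ClusterIP (10.96.0.1:443) within the first 30s after this coredns instance restarted, presumably before the freshly restarted kube-proxy (see the node events below) had reprogrammed service rules; no further errors follow. A quick confirmation that the ClusterIP and its backing endpoint are in place, assuming the kubectl context matches the profile name:
	
	kubectl --context old-k8s-version-716021 -n default get service kubernetes
	kubectl --context old-k8s-version-716021 -n default get endpoints kubernetes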
	
	
	==> coredns [fefa6581f4e4cb7fefe7289a78cf684582ea646cd5283484696218c2863765ed] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:40669 - 22419 "HINFO IN 3949177277945759547.2442513995244647269. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012140292s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-716021
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-716021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aced4bb0374ad4c19753bf24eee8bc7aa8774c9c
	                    minikube.k8s.io/name=old-k8s-version-716021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_07T13_43_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Oct 2024 13:43:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-716021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Oct 2024 13:52:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Oct 2024 13:47:17 +0000   Mon, 07 Oct 2024 13:43:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Oct 2024 13:47:17 +0000   Mon, 07 Oct 2024 13:43:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Oct 2024 13:47:17 +0000   Mon, 07 Oct 2024 13:43:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Oct 2024 13:47:17 +0000   Mon, 07 Oct 2024 13:43:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-716021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 f005b51b0a64468d8335f0369cec4f73
	  System UUID:                fb3e183d-5e41-4e52-a02f-96bcb2f620a0
	  Boot ID:                    21f414e1-c967-4988-b7c1-53380c0b20c8
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 coredns-74ff55c5b-fz8dj                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m12s
	  kube-system                 etcd-old-k8s-version-716021                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m18s
	  kube-system                 kindnet-z4nmp                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m11s
	  kube-system                 kube-apiserver-old-k8s-version-716021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-controller-manager-old-k8s-version-716021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-proxy-hdch9                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m11s
	  kube-system                 kube-scheduler-old-k8s-version-716021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 metrics-server-9975d5f86-b7ct2                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m26s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-wxkkm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-pm244               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m37s (x5 over 8m37s)  kubelet     Node old-k8s-version-716021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m37s (x4 over 8m37s)  kubelet     Node old-k8s-version-716021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m37s (x4 over 8m37s)  kubelet     Node old-k8s-version-716021 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m19s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m19s                  kubelet     Node old-k8s-version-716021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m19s                  kubelet     Node old-k8s-version-716021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m19s                  kubelet     Node old-k8s-version-716021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m18s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m12s                  kubelet     Node old-k8s-version-716021 status is now: NodeReady
	  Normal  Starting                 8m10s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m56s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-716021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s (x8 over 5m56s)  kubelet     Node old-k8s-version-716021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s (x7 over 5m56s)  kubelet     Node old-k8s-version-716021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m56s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m40s                  kube-proxy  Starting kube-proxy.
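	The request table above already counts metrics-server-9975d5f86-b7ct2 even though, per the containerd log, its image never pulls. Two quick follow-ups, again assuming the kubectl context matches the profile name: read the node's allocatable figures directly, and ask the stuck pod for its events:
	
	kubectl --context old-k8s-version-716021 get node old-k8s-version-716021 -o jsonpath='{.status.allocatable.cpu} CPU, {.status.allocatable.memory}{"\n"}'
	kubectl --context old-k8s-version-716021 -n kube-system describe pod metrics-server-9975d5f86-b7ct2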
	
	
	==> dmesg <==
	[Oct 7 12:30] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	[  +0.109813] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	
	
	==> etcd [3f8d8d7911069dffad8bf7ce9156d34436c105ed532c7439e0b6bda21c43e87c] <==
	raft2024/10/07 13:43:34 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/10/07 13:43:34 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/07 13:43:34 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/07 13:43:34 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-07 13:43:34.142265 I | etcdserver: published {Name:old-k8s-version-716021 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-07 13:43:34.142335 I | embed: ready to serve client requests
	2024-10-07 13:43:34.142603 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-07 13:43:34.142771 I | embed: ready to serve client requests
	2024-10-07 13:43:34.144100 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-07 13:43:34.147113 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-07 13:43:34.147372 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-07 13:43:34.186057 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-07 13:43:53.718746 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:43:54.748513 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:44:04.748517 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:44:14.748454 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:44:24.748557 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:44:34.748533 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:44:44.748560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:44:54.748464 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:45:04.748466 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:45:14.748526 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:45:24.748501 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:45:34.748561 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:45:44.748587 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [538e7c613d2fdfcf8bdf655918adbdaa8c80e7d22ee153d71d980bad173f6cd1] <==
	2024-10-07 13:48:02.748132 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:48:12.747821 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:48:22.747980 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:48:32.747836 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:48:42.748056 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:48:52.747895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:49:02.748005 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:49:12.747779 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:49:22.747934 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:49:32.747997 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:49:42.748060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:49:52.747876 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:50:02.747754 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:50:12.747913 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:50:22.747862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:50:32.747906 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:50:42.747837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:50:52.747941 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:51:02.747937 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:51:12.748079 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:51:22.747904 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:51:32.747983 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:51:42.747883 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:51:52.747893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-07 13:52:02.748061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
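	Both etcd generations do little here beyond answering the /health poll every ten seconds. Running that probe by hand requires client TLS on 2379; a sketch assuming minikube's usual certificate layout under /var/lib/minikube/certs/etcd (the paths are an assumption, not taken from this log):
	
	out/minikube-linux-arm64 -p old-k8s-version-716021 ssh "sudo curl -s --cacert /var/lib/minikube/certs/etcd/ca.crt --cert /var/lib/minikube/certs/etcd/healthcheck-client.crt --key /var/lib/minikube/certs/etcd/healthcheck-client.key https://127.0.0.1:2379/health"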
	
	
	==> kernel <==
	 13:52:10 up  3:34,  0 users,  load average: 0.98, 1.93, 2.49
	Linux old-k8s-version-716021 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [54c2e6b5d938cc814a93018f032c376c76d65bc1872f0fa55dd63fe950ff317f] <==
	I1007 13:44:02.819896       1 controller.go:338] Waiting for informer caches to sync
	I1007 13:44:02.819916       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1007 13:44:03.020245       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1007 13:44:03.020282       1 metrics.go:61] Registering metrics
	I1007 13:44:03.020361       1 controller.go:374] Syncing nftables rules
	I1007 13:44:12.822538       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:44:12.822658       1 main.go:299] handling current node
	I1007 13:44:22.818808       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:44:22.818845       1 main.go:299] handling current node
	I1007 13:44:32.823840       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:44:32.823875       1 main.go:299] handling current node
	I1007 13:44:42.826820       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:44:42.827040       1 main.go:299] handling current node
	I1007 13:44:52.819235       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:44:52.819268       1 main.go:299] handling current node
	I1007 13:45:02.818818       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:45:02.818862       1 main.go:299] handling current node
	I1007 13:45:12.827139       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:45:12.827176       1 main.go:299] handling current node
	I1007 13:45:22.821794       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:45:22.821832       1 main.go:299] handling current node
	I1007 13:45:32.825768       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:45:32.825802       1 main.go:299] handling current node
	I1007 13:45:42.824677       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:45:42.824711       1 main.go:299] handling current node
	
	
	==> kindnet [c44e873fb63063161eea0a9a33fcb424a1fca04659773868480c9079f14fcde3] <==
	I1007 13:50:09.930200       1 main.go:299] handling current node
	I1007 13:50:19.932031       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:50:19.932061       1 main.go:299] handling current node
	I1007 13:50:29.923177       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:50:29.923213       1 main.go:299] handling current node
	I1007 13:50:39.926956       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:50:39.926993       1 main.go:299] handling current node
	I1007 13:50:49.925884       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:50:49.925991       1 main.go:299] handling current node
	I1007 13:50:59.931208       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:50:59.931244       1 main.go:299] handling current node
	I1007 13:51:09.929077       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:51:09.929116       1 main.go:299] handling current node
	I1007 13:51:19.929785       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:51:19.929823       1 main.go:299] handling current node
	I1007 13:51:29.923425       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:51:29.923463       1 main.go:299] handling current node
	I1007 13:51:39.929304       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:51:39.929341       1 main.go:299] handling current node
	I1007 13:51:49.932255       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:51:49.932537       1 main.go:299] handling current node
	I1007 13:51:59.932049       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:51:59.932153       1 main.go:299] handling current node
	I1007 13:52:09.929752       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1007 13:52:09.929785       1 main.go:299] handling current node
	
	
	==> kube-apiserver [6438e2a98b44ed7687544147d7fde2facece2b6119ba86ffa996a5b8e7019da7] <==
	I1007 13:48:59.163886       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:48:59.163897       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1007 13:49:30.677024       1 handler_proxy.go:102] no RequestInfo found in the context
	E1007 13:49:30.677103       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1007 13:49:30.677112       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:49:31.478845       1 client.go:360] parsed scheme: "passthrough"
	I1007 13:49:31.478891       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:49:31.478901       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 13:50:03.435282       1 client.go:360] parsed scheme: "passthrough"
	I1007 13:50:03.435327       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:50:03.435336       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 13:50:38.788774       1 client.go:360] parsed scheme: "passthrough"
	I1007 13:50:38.788819       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:50:38.788828       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 13:51:11.378982       1 client.go:360] parsed scheme: "passthrough"
	I1007 13:51:11.379027       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:51:11.379036       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1007 13:51:27.833490       1 handler_proxy.go:102] no RequestInfo found in the context
	E1007 13:51:27.833778       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1007 13:51:27.833790       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1007 13:51:50.957540       1 client.go:360] parsed scheme: "passthrough"
	I1007 13:51:50.957586       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:51:50.957625       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
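	The recurring "no RequestInfo found in the context" / 503 pair comes from the aggregated v1beta1.metrics.k8s.io APIService having no healthy backend: metrics-server never gets past the image pull seen in the containerd log, so the OpenAPI aggregator rate-limits and re-queues it indefinitely. The APIService's own status should record the same condition, assuming the kubectl context matches the profile name:
	
	kubectl --context old-k8s-version-716021 get apiservice v1beta1.metrics.k8s.io
	kubectl --context old-k8s-version-716021 get apiservice v1beta1.metrics.k8s.io -o jsonpath='{.status.conditions[?(@.type=="Available")].message}{"\n"}'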
	
	
	==> kube-apiserver [dffbe9e7eda4a16ea00f685c861a1d7506d88d6ae76dc1fbd3b528f0186bf960] <==
	I1007 13:43:41.105036       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1007 13:43:41.105070       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1007 13:43:41.137501       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1007 13:43:41.143835       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1007 13:43:41.143859       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1007 13:43:41.649371       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1007 13:43:41.694835       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1007 13:43:41.789937       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1007 13:43:41.791169       1 controller.go:606] quota admission added evaluator for: endpoints
	I1007 13:43:41.795886       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1007 13:43:42.742090       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1007 13:43:43.394655       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1007 13:43:43.444123       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1007 13:43:51.856855       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1007 13:43:58.803051       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1007 13:43:59.009701       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1007 13:44:07.521562       1 client.go:360] parsed scheme: "passthrough"
	I1007 13:44:07.521607       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:44:07.521616       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 13:44:37.788665       1 client.go:360] parsed scheme: "passthrough"
	I1007 13:44:37.788780       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:44:37.788809       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1007 13:45:11.255518       1 client.go:360] parsed scheme: "passthrough"
	I1007 13:45:11.255573       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1007 13:45:11.255583       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [6ee84abf1b4464314c9cb9e84d60de9b2b00461bb97fe11808df52c7e0f87771] <==
	I1007 13:43:58.970244       1 shared_informer.go:247] Caches are synced for attach detach 
	I1007 13:43:58.970690       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	I1007 13:43:58.970885       1 shared_informer.go:247] Caches are synced for taint 
	I1007 13:43:58.970971       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	I1007 13:43:58.970280       1 shared_informer.go:247] Caches are synced for daemon sets 
	W1007 13:43:58.977772       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-716021. Assuming now as a timestamp.
	I1007 13:43:58.987224       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1007 13:43:58.978236       1 range_allocator.go:373] Set node old-k8s-version-716021 PodCIDR to [10.244.0.0/24]
	I1007 13:43:58.979484       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I1007 13:43:58.979579       1 event.go:291] "Event occurred" object="old-k8s-version-716021" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-716021 event: Registered Node old-k8s-version-716021 in Controller"
	I1007 13:43:59.034859       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-hdch9"
	I1007 13:43:59.039425       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-z4nmp"
	I1007 13:43:59.100403       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E1007 13:43:59.102041       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{Name:"kube-proxy", Namespace:"kube-system", UID:"b68eb93c-4844-460e-9240-1b835468c690", ResourceVersion:"258", ...full DaemonSet object dump elided...}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1007 13:43:59.104992       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{Name:"kindnet", Namespace:"kube-system", UID:"cbda029c-7cd1-4301-80cf-4e3836ef7b97", ResourceVersion:"266", ...full DaemonSet object dump elided...}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	E1007 13:43:59.147243       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"cbda029c-7cd1-4301-80cf-4e3836ef7b97", ResourceVersion:"418", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63863905424, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"kindnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20240813-c6f155d6\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001e93140), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001e93160)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001e93180), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001e931a0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001e931c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e931e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e93200), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001e93220), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20240813-c6f155d6", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001e93240)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001e93280)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001e88ea0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001e86fb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004cd650), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000115a08)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001e87000)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1007 13:43:59.415561       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1007 13:43:59.415586       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1007 13:43:59.416416       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1007 13:44:00.947138       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1007 13:44:00.972043       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-86shr"
	I1007 13:44:03.987678       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1007 13:45:43.753088       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E1007 13:45:44.050231       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E1007 13:45:44.050331       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [a6e224cfa70000e35a720f8c9ed9661a70147bc5dc65460934496ec9b288fe06] <==
	I1007 13:47:50.427466       1 request.go:655] Throttling request took 1.047952556s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1007 13:47:51.279079       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 13:48:17.315614       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 13:48:22.929655       1 request.go:655] Throttling request took 1.048184797s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1?timeout=32s
	W1007 13:48:23.780985       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 13:48:47.817497       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 13:48:55.431448       1 request.go:655] Throttling request took 1.048352679s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W1007 13:48:56.283485       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 13:49:18.319381       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 13:49:27.933845       1 request.go:655] Throttling request took 1.048382921s, request: GET:https://192.168.76.2:8443/apis/storage.k8s.io/v1beta1?timeout=32s
	W1007 13:49:28.785561       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 13:49:48.822191       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 13:50:00.437247       1 request.go:655] Throttling request took 1.049197471s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W1007 13:50:01.287806       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 13:50:19.324383       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 13:50:32.938326       1 request.go:655] Throttling request took 1.04737806s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1007 13:50:33.789820       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 13:50:49.826799       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 13:51:05.441439       1 request.go:655] Throttling request took 1.049329447s, request: GET:https://192.168.76.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
	W1007 13:51:06.291943       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 13:51:20.329125       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 13:51:37.942330       1 request.go:655] Throttling request took 1.048381579s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W1007 13:51:38.793874       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1007 13:51:50.831944       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1007 13:52:10.444901       1 request.go:655] Throttling request took 1.013193807s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	
	
	==> kube-proxy [90271f39ba89eca0f9f411179e611fffa8cb7092df3cd7385b2489d67eb7a32d] <==
	I1007 13:44:00.145956       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1007 13:44:00.146098       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1007 13:44:00.250972       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1007 13:44:00.251098       1 server_others.go:185] Using iptables Proxier.
	I1007 13:44:00.251357       1 server.go:650] Version: v1.20.0
	I1007 13:44:00.252035       1 config.go:315] Starting service config controller
	I1007 13:44:00.252059       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1007 13:44:00.255433       1 config.go:224] Starting endpoint slice config controller
	I1007 13:44:00.255465       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1007 13:44:00.352286       1 shared_informer.go:247] Caches are synced for service config 
	I1007 13:44:00.359334       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [b47f6084fbcd06d9de3d640a50b2eedabdcfa0e9e99795313dbbf409ba0b34ba] <==
	I1007 13:46:30.904457       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1007 13:46:30.904536       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1007 13:46:30.927102       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1007 13:46:30.927291       1 server_others.go:185] Using iptables Proxier.
	I1007 13:46:30.927853       1 server.go:650] Version: v1.20.0
	I1007 13:46:30.928705       1 config.go:315] Starting service config controller
	I1007 13:46:30.928842       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1007 13:46:30.928945       1 config.go:224] Starting endpoint slice config controller
	I1007 13:46:30.929023       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1007 13:46:31.029060       1 shared_informer.go:247] Caches are synced for service config 
	I1007 13:46:31.029292       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [1e75bdb1fcc164ca7ea09aaa49994e75347b13bbf4549844b1471555f94af297] <==
	I1007 13:46:18.291780       1 serving.go:331] Generated self-signed cert in-memory
	W1007 13:46:26.724674       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 13:46:26.727853       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 13:46:26.727887       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 13:46:26.727896       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 13:46:26.968380       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1007 13:46:26.981976       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 13:46:26.982006       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 13:46:26.982029       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1007 13:46:27.182261       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [3a2ed108143653900fc42a927e9971546f5f58f58c845162e9ad03a74ef4c19f] <==
	W1007 13:43:40.253157       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1007 13:43:40.253335       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1007 13:43:40.253355       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1007 13:43:40.253361       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1007 13:43:40.349431       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1007 13:43:40.349769       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 13:43:40.353733       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1007 13:43:40.353946       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1007 13:43:40.379707       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 13:43:40.380015       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1007 13:43:40.380222       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1007 13:43:40.381651       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1007 13:43:40.382085       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 13:43:40.382292       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1007 13:43:40.382553       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1007 13:43:40.382771       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1007 13:43:40.382798       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1007 13:43:40.397451       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:43:40.397815       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1007 13:43:40.397893       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1007 13:43:41.223490       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1007 13:43:41.257068       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1007 13:43:41.274574       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1007 13:43:41.366758       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1007 13:43:41.854593       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Oct 07 13:50:19 old-k8s-version-716021 kubelet[666]: E1007 13:50:19.206748     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:50:28 old-k8s-version-716021 kubelet[666]: I1007 13:50:28.206863     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228
	Oct 07 13:50:28 old-k8s-version-716021 kubelet[666]: E1007 13:50:28.207831     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	Oct 07 13:50:30 old-k8s-version-716021 kubelet[666]: E1007 13:50:30.211659     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:50:40 old-k8s-version-716021 kubelet[666]: I1007 13:50:40.210036     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228
	Oct 07 13:50:40 old-k8s-version-716021 kubelet[666]: E1007 13:50:40.210379     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	Oct 07 13:50:45 old-k8s-version-716021 kubelet[666]: E1007 13:50:45.207104     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:50:53 old-k8s-version-716021 kubelet[666]: I1007 13:50:53.206149     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228
	Oct 07 13:50:53 old-k8s-version-716021 kubelet[666]: E1007 13:50:53.206980     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	Oct 07 13:50:57 old-k8s-version-716021 kubelet[666]: E1007 13:50:57.206804     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:51:08 old-k8s-version-716021 kubelet[666]: I1007 13:51:08.206132     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228
	Oct 07 13:51:08 old-k8s-version-716021 kubelet[666]: E1007 13:51:08.206532     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	Oct 07 13:51:08 old-k8s-version-716021 kubelet[666]: E1007 13:51:08.207812     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: E1007 13:51:22.206879     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: I1007 13:51:22.207713     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228
	Oct 07 13:51:22 old-k8s-version-716021 kubelet[666]: E1007 13:51:22.208036     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	Oct 07 13:51:33 old-k8s-version-716021 kubelet[666]: E1007 13:51:33.206925     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:51:34 old-k8s-version-716021 kubelet[666]: I1007 13:51:34.206073     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228
	Oct 07 13:51:34 old-k8s-version-716021 kubelet[666]: E1007 13:51:34.206498     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	Oct 07 13:51:47 old-k8s-version-716021 kubelet[666]: E1007 13:51:47.231399     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 07 13:51:48 old-k8s-version-716021 kubelet[666]: I1007 13:51:48.206650     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228
	Oct 07 13:51:48 old-k8s-version-716021 kubelet[666]: E1007 13:51:48.206953     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	Oct 07 13:52:01 old-k8s-version-716021 kubelet[666]: I1007 13:52:01.206118     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: b1a44caebf01b9247c1eb127bdfd3b98f523126bafeeb49a9daec6b059aae228
	Oct 07 13:52:01 old-k8s-version-716021 kubelet[666]: E1007 13:52:01.206466     666 pod_workers.go:191] Error syncing pod d8c979bf-7b30-4283-aaeb-85e8808af24f ("dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-wxkkm_kubernetes-dashboard(d8c979bf-7b30-4283-aaeb-85e8808af24f)"
	Oct 07 13:52:03 old-k8s-version-716021 kubelet[666]: E1007 13:52:03.206958     666 pod_workers.go:191] Error syncing pod 184eab58-b771-4e54-8c7c-46985955b403 ("metrics-server-9975d5f86-b7ct2_kube-system(184eab58-b771-4e54-8c7c-46985955b403)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [02588bae23d099926c85f07f669f2281dd82887c6cda051d44c64293d25ce608] <==
	2024/10/07 13:46:54 Using namespace: kubernetes-dashboard
	2024/10/07 13:46:54 Using in-cluster config to connect to apiserver
	2024/10/07 13:46:54 Using secret token for csrf signing
	2024/10/07 13:46:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/07 13:46:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/07 13:46:54 Successful initial request to the apiserver, version: v1.20.0
	2024/10/07 13:46:54 Generating JWE encryption key
	2024/10/07 13:46:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/07 13:46:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/07 13:46:54 Initializing JWE encryption key from synchronized object
	2024/10/07 13:46:54 Creating in-cluster Sidecar client
	2024/10/07 13:46:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:46:54 Serving insecurely on HTTP port: 9090
	2024/10/07 13:47:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:47:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:48:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:48:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:49:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:49:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:50:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:50:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:51:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:51:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/07 13:46:54 Starting overwatch
	
	
	==> storage-provisioner [3e11e0da9f75ad4dd8fcceb6b095c49ccfce3438e6b64b5adf91722fb701d656] <==
	I1007 13:46:30.597251       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1007 13:47:00.602012       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [94f42c6c49fd89fb1387486b3bfb41e7d9ad24f923a9cbfb3757cab9ba0d589c] <==
	I1007 13:47:12.309683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1007 13:47:12.330891       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1007 13:47:12.331399       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1007 13:47:29.798616       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1007 13:47:29.798916       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-716021_239d44ce-d2d6-4f0e-b00b-4cd5ebdfeb87!
	I1007 13:47:29.799724       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5cb77622-d157-4ccc-9b60-30abcf10b5f1", APIVersion:"v1", ResourceVersion:"841", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-716021_239d44ce-d2d6-4f0e-b00b-4cd5ebdfeb87 became leader
	I1007 13:47:29.902403       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-716021_239d44ce-d2d6-4f0e-b00b-4cd5ebdfeb87!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-716021 -n old-k8s-version-716021
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-716021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-b7ct2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-716021 describe pod metrics-server-9975d5f86-b7ct2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-716021 describe pod metrics-server-9975d5f86-b7ct2: exit status 1 (107.638485ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-b7ct2" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-716021 describe pod metrics-server-9975d5f86-b7ct2: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (374.37s)

Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.44
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.17
12 TestDownloadOnly/v1.31.1/json-events 5.28
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
27 TestAddons/Setup 221.07
31 TestAddons/serial/GCPAuth/Namespaces 0.19
32 TestAddons/serial/GCPAuth/PullSecret 10.89
34 TestAddons/parallel/Registry 18.12
35 TestAddons/parallel/Ingress 20.09
36 TestAddons/parallel/InspektorGadget 11.11
37 TestAddons/parallel/MetricsServer 5.8
39 TestAddons/parallel/CSI 31.2
40 TestAddons/parallel/Headlamp 16.3
41 TestAddons/parallel/CloudSpanner 6.74
42 TestAddons/parallel/LocalPath 53.25
43 TestAddons/parallel/NvidiaDevicePlugin 5.85
44 TestAddons/parallel/Yakd 11.84
45 TestAddons/StoppedEnableDisable 12.31
46 TestCertOptions 36.11
47 TestCertExpiration 232.3
49 TestForceSystemdFlag 42.41
50 TestForceSystemdEnv 41.66
51 TestDockerEnvContainerd 45.63
56 TestErrorSpam/setup 30.76
57 TestErrorSpam/start 0.78
58 TestErrorSpam/status 1.15
59 TestErrorSpam/pause 1.83
60 TestErrorSpam/unpause 1.86
61 TestErrorSpam/stop 1.47
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 48.89
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.29
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.17
73 TestFunctional/serial/CacheCmd/cache/add_local 1.23
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.03
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.19
81 TestFunctional/serial/ExtraConfig 45.43
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.85
84 TestFunctional/serial/LogsFileCmd 1.91
85 TestFunctional/serial/InvalidService 5.57
87 TestFunctional/parallel/ConfigCmd 0.52
88 TestFunctional/parallel/DashboardCmd 10.9
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1.14
95 TestFunctional/parallel/ServiceCmdConnect 11.7
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 26.15
99 TestFunctional/parallel/SSHCmd 0.64
100 TestFunctional/parallel/CpCmd 2.29
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 2.12
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
111 TestFunctional/parallel/License 0.26
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.44
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
125 TestFunctional/parallel/ServiceCmd/List 0.67
126 TestFunctional/parallel/ProfileCmd/profile_list 0.53
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.56
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
130 TestFunctional/parallel/MountCmd/any-port 6.67
131 TestFunctional/parallel/ServiceCmd/Format 0.52
132 TestFunctional/parallel/ServiceCmd/URL 0.45
133 TestFunctional/parallel/MountCmd/specific-port 2.2
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.86
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.42
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.3
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.71
142 TestFunctional/parallel/ImageCommands/Setup 0.73
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.2
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.66
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.49
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 127.68
160 TestMultiControlPlane/serial/DeployApp 32.51
161 TestMultiControlPlane/serial/PingHostFromPods 1.68
162 TestMultiControlPlane/serial/AddWorkerNode 20.72
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.02
165 TestMultiControlPlane/serial/CopyFile 19.77
166 TestMultiControlPlane/serial/StopSecondaryNode 12.96
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
168 TestMultiControlPlane/serial/RestartSecondaryNode 19.46
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.02
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 152.59
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.71
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
173 TestMultiControlPlane/serial/StopCluster 36.26
174 TestMultiControlPlane/serial/RestartCluster 64.86
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
176 TestMultiControlPlane/serial/AddSecondaryNode 46.06
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
181 TestJSONOutput/start/Command 47.33
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.78
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.77
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.24
206 TestKicCustomNetwork/create_custom_network 40.93
207 TestKicCustomNetwork/use_default_bridge_network 33.44
208 TestKicExistingNetwork 35.07
209 TestKicCustomSubnet 33.46
210 TestKicStaticIP 31.74
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 67.42
215 TestMountStart/serial/StartWithMountFirst 9.14
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 9.06
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.62
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 7.43
223 TestMountStart/serial/VerifyMountPostStop 0.28
226 TestMultiNode/serial/FreshStart2Nodes 64.76
227 TestMultiNode/serial/DeployApp2Nodes 18.07
228 TestMultiNode/serial/PingHostFrom2Pods 1.01
229 TestMultiNode/serial/AddNode 18.17
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.7
232 TestMultiNode/serial/CopyFile 10.21
233 TestMultiNode/serial/StopNode 2.34
234 TestMultiNode/serial/StartAfterStop 10.38
235 TestMultiNode/serial/RestartKeepsNodes 93.71
236 TestMultiNode/serial/DeleteNode 5.55
237 TestMultiNode/serial/StopMultiNode 24.04
238 TestMultiNode/serial/RestartMultiNode 54.27
239 TestMultiNode/serial/ValidateNameConflict 33.95
244 TestPreload 114.09
246 TestScheduledStopUnix 109.58
249 TestInsufficientStorage 10.4
250 TestRunningBinaryUpgrade 83.8
252 TestKubernetesUpgrade 355.29
253 TestMissingContainerUpgrade 179.12
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 40.56
257 TestNoKubernetes/serial/StartWithStopK8s 17.71
258 TestNoKubernetes/serial/Start 8.98
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
260 TestNoKubernetes/serial/ProfileList 0.98
261 TestNoKubernetes/serial/Stop 1.21
262 TestNoKubernetes/serial/StartNoArgs 7.11
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
264 TestStoppedBinaryUpgrade/Setup 0.95
265 TestStoppedBinaryUpgrade/Upgrade 85.21
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.18
275 TestPause/serial/Start 53.15
276 TestPause/serial/SecondStartNoReconfiguration 7.82
277 TestPause/serial/Pause 1.16
278 TestPause/serial/VerifyStatus 0.49
279 TestPause/serial/Unpause 0.95
280 TestPause/serial/PauseAgain 0.99
281 TestPause/serial/DeletePaused 3.05
282 TestPause/serial/VerifyDeletedResources 0.48
290 TestNetworkPlugins/group/false 4.66
295 TestStartStop/group/old-k8s-version/serial/FirstStart 155.19
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.82
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.76
298 TestStartStop/group/old-k8s-version/serial/Stop 12.86
300 TestStartStop/group/no-preload/serial/FirstStart 75.29
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
303 TestStartStop/group/no-preload/serial/DeployApp 8.41
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
305 TestStartStop/group/no-preload/serial/Stop 12.09
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 277.2
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.13
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
312 TestStartStop/group/no-preload/serial/Pause 3.26
313 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.12
315 TestStartStop/group/embed-certs/serial/FirstStart 83.38
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.29
317 TestStartStop/group/old-k8s-version/serial/Pause 4.33
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.85
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
321 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
322 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.15
323 TestStartStop/group/embed-certs/serial/DeployApp 8.34
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.69
326 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.66
327 TestStartStop/group/embed-certs/serial/Stop 12.38
328 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
329 TestStartStop/group/embed-certs/serial/SecondStart 291.83
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
333 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
335 TestStartStop/group/newest-cni/serial/FirstStart 39.15
336 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
337 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
338 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.38
339 TestStartStop/group/embed-certs/serial/Pause 4.67
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.77
342 TestStartStop/group/newest-cni/serial/Stop 1.55
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
344 TestStartStop/group/newest-cni/serial/SecondStart 22.11
345 TestNetworkPlugins/group/auto/Start 101.32
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
349 TestStartStop/group/newest-cni/serial/Pause 4.63
350 TestNetworkPlugins/group/kindnet/Start 94.57
351 TestNetworkPlugins/group/auto/KubeletFlags 0.31
352 TestNetworkPlugins/group/auto/NetCatPod 10.31
353 TestNetworkPlugins/group/auto/DNS 0.2
354 TestNetworkPlugins/group/auto/Localhost 0.17
355 TestNetworkPlugins/group/auto/HairPin 0.19
356 TestNetworkPlugins/group/kindnet/ControllerPod 6
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
358 TestNetworkPlugins/group/kindnet/NetCatPod 9.38
359 TestNetworkPlugins/group/calico/Start 74.46
360 TestNetworkPlugins/group/kindnet/DNS 0.3
361 TestNetworkPlugins/group/kindnet/Localhost 0.22
362 TestNetworkPlugins/group/kindnet/HairPin 0.24
363 TestNetworkPlugins/group/custom-flannel/Start 55.88
364 TestNetworkPlugins/group/calico/ControllerPod 6.01
365 TestNetworkPlugins/group/calico/KubeletFlags 0.3
366 TestNetworkPlugins/group/calico/NetCatPod 10.26
367 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.3
369 TestNetworkPlugins/group/calico/DNS 0.2
370 TestNetworkPlugins/group/calico/Localhost 0.16
371 TestNetworkPlugins/group/calico/HairPin 0.16
372 TestNetworkPlugins/group/custom-flannel/DNS 0.21
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
375 TestNetworkPlugins/group/enable-default-cni/Start 51.86
376 TestNetworkPlugins/group/flannel/Start 60.82
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.44
384 TestNetworkPlugins/group/flannel/NetCatPod 10.36
385 TestNetworkPlugins/group/flannel/DNS 0.27
386 TestNetworkPlugins/group/flannel/Localhost 0.21
387 TestNetworkPlugins/group/flannel/HairPin 0.24
388 TestNetworkPlugins/group/bridge/Start 77.74
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 9.27
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.16
393 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (8.44s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-567694 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-567694 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.434765611s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.44s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1007 12:55:44.614766  580163 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1007 12:55:44.614848  580163 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-567694
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-567694: exit status 85 (77.722178ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-567694 | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |          |
	|         | -p download-only-567694        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:55:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:55:36.226407  580168 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:55:36.226619  580168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:55:36.226634  580168 out.go:358] Setting ErrFile to fd 2...
	I1007 12:55:36.226639  580168 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:55:36.226908  580168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	W1007 12:55:36.227049  580168 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18424-574640/.minikube/config/config.json: open /home/jenkins/minikube-integration/18424-574640/.minikube/config/config.json: no such file or directory
	I1007 12:55:36.227442  580168 out.go:352] Setting JSON to true
	I1007 12:55:36.228365  580168 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9485,"bootTime":1728296251,"procs":159,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1007 12:55:36.228438  580168 start.go:139] virtualization:  
	I1007 12:55:36.231213  580168 out.go:97] [download-only-567694] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:55:36.231640  580168 notify.go:220] Checking for updates...
	W1007 12:55:36.231683  580168 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball: no such file or directory
	I1007 12:55:36.233926  580168 out.go:169] MINIKUBE_LOCATION=18424
	I1007 12:55:36.235976  580168 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:55:36.237778  580168 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 12:55:36.239964  580168 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	I1007 12:55:36.241828  580168 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 12:55:36.245527  580168 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 12:55:36.245909  580168 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:55:36.268249  580168 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:55:36.268384  580168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:55:36.325934  580168 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 12:55:36.316501308 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:55:36.326043  580168 docker.go:318] overlay module found
	I1007 12:55:36.328078  580168 out.go:97] Using the docker driver based on user configuration
	I1007 12:55:36.328108  580168 start.go:297] selected driver: docker
	I1007 12:55:36.328115  580168 start.go:901] validating driver "docker" against <nil>
	I1007 12:55:36.328268  580168 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:55:36.384154  580168 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 12:55:36.374545805 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:55:36.384357  580168 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:55:36.384655  580168 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 12:55:36.384808  580168 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 12:55:36.387018  580168 out.go:169] Using Docker driver with root privileges
	I1007 12:55:36.388972  580168 cni.go:84] Creating CNI manager for ""
	I1007 12:55:36.389089  580168 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:55:36.389104  580168 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:55:36.389191  580168 start.go:340] cluster config:
	{Name:download-only-567694 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-567694 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:55:36.391211  580168 out.go:97] Starting "download-only-567694" primary control-plane node in "download-only-567694" cluster
	I1007 12:55:36.391233  580168 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 12:55:36.393084  580168 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 12:55:36.393112  580168 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1007 12:55:36.393281  580168 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 12:55:36.408452  580168 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 12:55:36.408622  580168 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 12:55:36.408718  580168 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 12:55:36.460402  580168 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1007 12:55:36.460448  580168 cache.go:56] Caching tarball of preloaded images
	I1007 12:55:36.460623  580168 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1007 12:55:36.463007  580168 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1007 12:55:36.463028  580168 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1007 12:55:36.550367  580168 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1007 12:55:40.736591  580168 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1007 12:55:40.736772  580168 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-567694 host does not exist
	  To start a cluster, run: "minikube start -p download-only-567694"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
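Repro note: the exit status 85 recorded above is tolerated by this test; a --download-only profile has no host, so there is nothing for "minikube logs" to read, and only the duration of the command is being measured. By hand:

    out/minikube-linux-arm64 logs -p download-only-567694
    echo $?    # 85 while the profile's control-plane host does not exist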

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.17s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-567694
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.17s)

TestDownloadOnly/v1.31.1/json-events (5.28s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-985583 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-985583 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.276139117s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.28s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1007 12:55:50.351492  580163 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1007 12:55:50.351537  580163 preload.go:146] Found local preload: /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-985583
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-985583: exit status 85 (70.197626ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-567694 | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |                     |
	|         | -p download-only-567694        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| delete  | -p download-only-567694        | download-only-567694 | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC | 07 Oct 24 12:55 UTC |
	| start   | -o=json --download-only        | download-only-985583 | jenkins | v1.34.0 | 07 Oct 24 12:55 UTC |                     |
	|         | -p download-only-985583        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/07 12:55:45
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1007 12:55:45.153171  580367 out.go:345] Setting OutFile to fd 1 ...
	I1007 12:55:45.153357  580367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:55:45.153377  580367 out.go:358] Setting ErrFile to fd 2...
	I1007 12:55:45.153383  580367 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 12:55:45.153752  580367 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 12:55:45.154269  580367 out.go:352] Setting JSON to true
	I1007 12:55:45.155329  580367 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":9494,"bootTime":1728296251,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1007 12:55:45.155430  580367 start.go:139] virtualization:  
	I1007 12:55:45.158040  580367 out.go:97] [download-only-985583] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 12:55:45.158500  580367 notify.go:220] Checking for updates...
	I1007 12:55:45.161080  580367 out.go:169] MINIKUBE_LOCATION=18424
	I1007 12:55:45.168193  580367 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 12:55:45.170703  580367 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 12:55:45.172886  580367 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	I1007 12:55:45.175182  580367 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1007 12:55:45.179996  580367 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1007 12:55:45.180392  580367 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 12:55:45.210171  580367 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 12:55:45.210405  580367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:55:45.276468  580367 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-07 12:55:45.261316978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:55:45.276614  580367 docker.go:318] overlay module found
	I1007 12:55:45.279531  580367 out.go:97] Using the docker driver based on user configuration
	I1007 12:55:45.279594  580367 start.go:297] selected driver: docker
	I1007 12:55:45.279603  580367 start.go:901] validating driver "docker" against <nil>
	I1007 12:55:45.279807  580367 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 12:55:45.351283  580367 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-07 12:55:45.340648442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 12:55:45.351511  580367 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1007 12:55:45.351856  580367 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1007 12:55:45.352069  580367 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1007 12:55:45.354177  580367 out.go:169] Using Docker driver with root privileges
	I1007 12:55:45.356171  580367 cni.go:84] Creating CNI manager for ""
	I1007 12:55:45.356255  580367 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1007 12:55:45.356271  580367 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1007 12:55:45.356371  580367 start.go:340] cluster config:
	{Name:download-only-985583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-985583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 12:55:45.358341  580367 out.go:97] Starting "download-only-985583" primary control-plane node in "download-only-985583" cluster
	I1007 12:55:45.358373  580367 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1007 12:55:45.360402  580367 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1007 12:55:45.360450  580367 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 12:55:45.360679  580367 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1007 12:55:45.379338  580367 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1007 12:55:45.379704  580367 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1007 12:55:45.379739  580367 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1007 12:55:45.379751  580367 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1007 12:55:45.379767  580367 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1007 12:55:45.416574  580367 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1007 12:55:45.416606  580367 cache.go:56] Caching tarball of preloaded images
	I1007 12:55:45.416791  580367 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 12:55:45.419124  580367 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1007 12:55:45.419160  580367 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1007 12:55:45.497230  580367 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1007 12:55:48.845323  580367 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1007 12:55:48.845424  580367 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/18424-574640/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1007 12:55:49.703573  580367 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1007 12:55:49.703968  580367 profile.go:143] Saving config to /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/download-only-985583/config.json ...
	I1007 12:55:49.704003  580367 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/download-only-985583/config.json: {Name:mk5dbcd6a0d01eb6174b521671061413106517b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1007 12:55:49.704204  580367 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1007 12:55:49.704351  580367 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/18424-574640/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-985583 host does not exist
	  To start a cluster, run: "minikube start -p download-only-985583"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-985583
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1007 12:55:51.583411  580163 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-013254 --alsologtostderr --binary-mirror http://127.0.0.1:36647 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-013254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-013254
--- PASS: TestBinaryMirror (0.59s)
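Repro note: TestBinaryMirror downloads the kubectl/kubelet/kubeadm binaries through a local HTTP endpoint given via --binary-mirror instead of dl.k8s.io. A hedged sketch (the test stands up its own in-process server; python3 -m http.server is only a stand-in here, and ./mirror is a hypothetical directory that would need to mimic the release/<version>/bin/linux/<arch>/ layout):

    python3 -m http.server 36647 --directory ./mirror &
    out/minikube-linux-arm64 start --download-only -p binary-mirror-013254 \
      --binary-mirror http://127.0.0.1:36647 --driver=docker --container-runtime=containerd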

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-956205
addons_test.go:934: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-956205: exit status 85 (77.332639ms)

-- stdout --
	* Profile "addons-956205" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-956205"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
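Repro note: this check (and the Disabling variant that follows) asserts that addon toggles fail cleanly, with exit status 85 and a pointer to "minikube profile list", when the target profile has not been created yet:

    out/minikube-linux-arm64 addons enable dashboard -p addons-956205
    echo $?    # 85: profile not found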

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-956205
addons_test.go:945: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-956205: exit status 85 (98.799615ms)

-- stdout --
	* Profile "addons-956205" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-956205"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (221.07s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-956205 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-956205 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m41.071623134s)
--- PASS: TestAddons/Setup (221.07s)
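Repro note: Setup brings up one cluster with the whole addon matrix enabled through repeated --addons flags. A trimmed sketch of the invocation above (flags copied from the run; the addon list can be shortened for a local machine):

    out/minikube-linux-arm64 start -p addons-956205 --wait=true --memory=4000 \
      --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns \
      --addons=csi-hostpath-driver --addons=volumesnapshots --addons=gcp-auth --addons=volcano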

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-956205 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-956205 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/PullSecret (10.89s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-956205 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-956205 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f0023250-1731-4c9c-b7d1-67ca81774e9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f0023250-1731-4c9c-b7d1-67ca81774e9e] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 10.003937303s
addons_test.go:633: (dbg) Run:  kubectl --context addons-956205 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-956205 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-956205 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-956205 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (10.89s)
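Repro note: PullSecret verifies that the gcp-auth addon injected Google credentials into a plain busybox pod. The same probes, copied from the test, can be run by hand against the cluster:

    kubectl --context addons-956205 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-956205 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-956205 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"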

                                                
                                    
TestAddons/parallel/Registry (18.12s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 5.338497ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-5dl6n" [6c6e6f59-720f-486b-99d8-a848c9f81e07] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.009965107s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-9kpnr" [145ee917-0ec2-4857-8fe0-d759e2d5ec18] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004667394s
addons_test.go:331: (dbg) Run:  kubectl --context addons-956205 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-956205 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-956205 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.609008603s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 ip
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable registry --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable registry --alsologtostderr -v=1: (1.263039263s)
--- PASS: TestAddons/parallel/Registry (18.12s)
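Repro note: the functional half of the Registry test is a single in-cluster probe from a throwaway busybox pod; a wget --spider that resolves the kube-system registry Service is the pass condition. Verbatim from the run:

    kubectl --context addons-956205 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"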

                                                
                                    
TestAddons/parallel/Ingress (20.09s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-956205 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-956205 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-956205 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9e294a0d-b798-40ad-8a8e-3211e453ff7d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9e294a0d-b798-40ad-8a8e-3211e453ff7d] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004069062s
I1007 13:04:51.195176  580163 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-956205 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable ingress-dns --alsologtostderr -v=1: (1.562148445s)
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable ingress --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable ingress --alsologtostderr -v=1: (7.77606284s)
--- PASS: TestAddons/parallel/Ingress (20.09s)
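Repro note: ingress is probed on the node via curl with a spoofed Host header, and ingress-dns from the host via nslookup against the node IP (192.168.49.2 in this run, as reported by the ip command above). By hand:

    out/minikube-linux-arm64 -p addons-956205 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2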

                                                
                                    
TestAddons/parallel/InspektorGadget (11.11s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sqb4m" [5ba84c87-7d0e-4620-8536-12e725e24e3d] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007818288s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable inspektor-gadget --alsologtostderr -v=1: (6.104297147s)
--- PASS: TestAddons/parallel/InspektorGadget (11.11s)

TestAddons/parallel/MetricsServer (5.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.704284ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-h8njn" [9b51577a-261f-4581-bc76-12b95938c80c] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003987811s
addons_test.go:402: (dbg) Run:  kubectl --context addons-956205 top pods -n kube-system
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)
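Repro note: once the metrics-server pod reports Running, the functional check is a single kubectl top call, which only succeeds when the metrics API is actually serving:

    kubectl --context addons-956205 top pods -n kube-system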

                                                
                                    
TestAddons/parallel/CSI (31.2s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1007 13:04:05.927833  580163 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1007 13:04:05.934296  580163 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1007 13:04:05.934334  580163 kapi.go:107] duration metric: took 9.435472ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.446278ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1d703bd7-d624-4aee-a06f-dd446be515f5] Pending
helpers_test.go:344: "task-pv-pod" [1d703bd7-d624-4aee-a06f-dd446be515f5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1d703bd7-d624-4aee-a06f-dd446be515f5] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003687715s
addons_test.go:511: (dbg) Run:  kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-956205 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-956205 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-956205 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-956205 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [8aab1530-c308-48ef-96fa-553d9f2b5bc5] Pending
helpers_test.go:344: "task-pv-pod-restore" [8aab1530-c308-48ef-96fa-553d9f2b5bc5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [8aab1530-c308-48ef-96fa-553d9f2b5bc5] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.006688728s
addons_test.go:553: (dbg) Run:  kubectl --context addons-956205 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-956205 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-956205 delete volumesnapshot new-snapshot-demo
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.932796787s)
--- PASS: TestAddons/parallel/CSI (31.20s)
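Repro note: the CSI test walks a full snapshot/restore cycle: PVC -> pod -> VolumeSnapshot -> restored PVC -> pod. A condensed sketch using the same manifests (the testdata/csi-hostpath-driver files ship with minikube's integration tests):

    kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-956205 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml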

                                                
                                    
TestAddons/parallel/Headlamp (16.3s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-956205 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-956205 --alsologtostderr -v=1: (1.111428074s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-ln9ch" [0b1d8bf2-0f05-473a-9446-9d60f25051ee] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-ln9ch" [0b1d8bf2-0f05-473a-9446-9d60f25051ee] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-ln9ch" [0b1d8bf2-0f05-473a-9446-9d60f25051ee] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-ln9ch" [0b1d8bf2-0f05-473a-9446-9d60f25051ee] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003960356s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable headlamp --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable headlamp --alsologtostderr -v=1: (6.178642879s)
--- PASS: TestAddons/parallel/Headlamp (16.30s)

TestAddons/parallel/CloudSpanner (6.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-rpdgz" [9d7e6960-1ab9-40e8-b83b-cfad15920718] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003585716s
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.74s)

TestAddons/parallel/LocalPath (53.25s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:883: (dbg) Run:  kubectl --context addons-956205 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:889: (dbg) Run:  kubectl --context addons-956205 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:893: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-956205 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d4928228-2acb-4f5b-9a6f-af675753d5b0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d4928228-2acb-4f5b-9a6f-af675753d5b0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d4928228-2acb-4f5b-9a6f-af675753d5b0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:896: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00407637s
addons_test.go:901: (dbg) Run:  kubectl --context addons-956205 get pvc test-pvc -o=json
addons_test.go:910: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 ssh "cat /opt/local-path-provisioner/pvc-ea796d8a-a681-4184-8cbb-1f11725b9ca4_default_test-pvc/file1"
addons_test.go:922: (dbg) Run:  kubectl --context addons-956205 delete pod test-local-path
addons_test.go:926: (dbg) Run:  kubectl --context addons-956205 delete pvc test-pvc
addons_test.go:979: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:979: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.832331515s)
--- PASS: TestAddons/parallel/LocalPath (53.25s)
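
What the LocalPath steps verify: a PVC bound by the local-path provisioner is backed by a plain directory on the node, so the file the busybox pod wrote can be read straight off the host path. A hedged sketch of that final read (the pvc-... segment of the path is this run's PVC UID and changes on every run):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Re-run the assertion from the log: read the provisioned file over
	// `minikube ssh`. Profile name and path are the ones captured above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "addons-956205", "ssh",
		"cat /opt/local-path-provisioner/pvc-ea796d8a-a681-4184-8cbb-1f11725b9ca4_default_test-pvc/file1").CombinedOutput()
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("file1 contents: %s", out)
}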

TestAddons/parallel/NvidiaDevicePlugin (5.85s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-dfwvg" [33c78e7e-3607-4278-ad34-573034aa90cf] Running
addons_test.go:958: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004669662s
addons_test.go:961: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-956205
2024/10/07 13:03:41 [DEBUG] GET http://192.168.49.2:5000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.85s)

TestAddons/parallel/Yakd (11.84s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-45vd4" [c731ca14-6ddb-4539-b355-0df6d6ad57c1] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004216508s
addons_test.go:973: (dbg) Run:  out/minikube-linux-arm64 -p addons-956205 addons disable yakd --alsologtostderr -v=1
addons_test.go:973: (dbg) Done: out/minikube-linux-arm64 -p addons-956205 addons disable yakd --alsologtostderr -v=1: (5.834241587s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

TestAddons/StoppedEnableDisable (12.31s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-956205
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-956205: (12.008621326s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-956205
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-956205
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-956205
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

TestCertOptions (36.11s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-608723 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E1007 13:42:36.456317  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-608723 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.490144468s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-608723 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-608723 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-608723 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-608723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-608723
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-608723: (1.971545415s)
--- PASS: TestCertOptions (36.11s)
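
The openssl step above is what checks that the extra SANs and the non-default API server port requested on the command line actually landed in the generated certificate. A hedged sketch of the same inspection with Go's crypto/x509 (the certificate path mirrors the ssh command; the expected values restate the --apiserver-ips and --apiserver-names flags; this is not the test's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path as read inside the node by the `minikube ssh` step above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)   // expect localhost and www.google.com among them
	fmt.Println("IP SANs:", cert.IPAddresses) // expect 127.0.0.1 and 192.168.15.15 among them
}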

TestCertExpiration (232.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-501751 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-501751 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.239328791s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-501751 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-501751 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.250182012s)
helpers_test.go:175: Cleaning up "cert-expiration-501751" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-501751
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-501751: (2.809045744s)
--- PASS: TestCertExpiration (232.30s)
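
One worked detail from the flags above: the first start issues certificates that expire in 3m (which appears to be why the 232s total dwarfs the two starts), and the second start renews them with --cert-expiration=8760h, i.e. exactly one year. A two-line confirmation of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	d, _ := time.ParseDuration("8760h")                             // the value passed to --cert-expiration above
	fmt.Printf("%.0f hours = %.0f days\n", d.Hours(), d.Hours()/24) // 8760 hours = 365 days
}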

TestForceSystemdFlag (42.41s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-040234 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-040234 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.590178944s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-040234 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-040234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-040234
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-040234: (2.437479847s)
--- PASS: TestForceSystemdFlag (42.41s)

TestForceSystemdEnv (41.66s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-622009 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-622009 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.921505014s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-622009 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-622009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-622009
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-622009: (2.280300812s)
--- PASS: TestForceSystemdEnv (41.66s)

TestDockerEnvContainerd (45.63s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-228853 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-228853 --driver=docker  --container-runtime=containerd: (29.923531415s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-228853"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-228853": (1.00427121s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-K4AWhRlHKTsB/agent.602456" SSH_AGENT_PID="602457" DOCKER_HOST=ssh://docker@127.0.0.1:33509 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-K4AWhRlHKTsB/agent.602456" SSH_AGENT_PID="602457" DOCKER_HOST=ssh://docker@127.0.0.1:33509 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-K4AWhRlHKTsB/agent.602456" SSH_AGENT_PID="602457" DOCKER_HOST=ssh://docker@127.0.0.1:33509 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.237662634s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-K4AWhRlHKTsB/agent.602456" SSH_AGENT_PID="602457" DOCKER_HOST=ssh://docker@127.0.0.1:33509 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-228853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-228853
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-228853: (1.991944794s)
--- PASS: TestDockerEnvContainerd (45.63s)
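
The bash lines above boil down to: export the variables printed by `minikube docker-env --ssh-host --ssh-add` (an SSH agent socket plus DOCKER_HOST=ssh://...), then drive the docker daemon inside the minikube container from the host's docker CLI. A minimal sketch of the same call from Go; the socket path, agent pid, and port are the throwaway values captured in this run's log, so they are purely illustrative:

package main

import (
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "version")
	// Values printed by `minikube docker-env --ssh-host --ssh-add` in this run.
	cmd.Env = append(os.Environ(),
		"SSH_AUTH_SOCK=/tmp/ssh-K4AWhRlHKTsB/agent.602456",
		"SSH_AGENT_PID=602457",
		"DOCKER_HOST=ssh://docker@127.0.0.1:33509",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // talks to the daemon inside the minikube container over SSH
}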

TestErrorSpam/setup (30.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-578038 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-578038 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-578038 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-578038 --driver=docker  --container-runtime=containerd: (30.750979069s)
--- PASS: TestErrorSpam/setup (30.76s)

TestErrorSpam/start (0.78s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 start --dry-run
--- PASS: TestErrorSpam/start (0.78s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (1.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 stop: (1.25949077s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-578038 --log_dir /tmp/nospam-578038 stop
--- PASS: TestErrorSpam/stop (1.47s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/18424-574640/.minikube/files/etc/test/nested/copy/580163/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (48.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389582 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-389582 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (48.886379016s)
--- PASS: TestFunctional/serial/StartWithProxy (48.89s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.29s)

=== RUN   TestFunctional/serial/SoftStart
I1007 13:07:38.386640  580163 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389582 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-389582 --alsologtostderr -v=8: (6.287614768s)
functional_test.go:663: soft start took 6.288993363s for "functional-389582" cluster.
I1007 13:07:44.674590  580163 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.29s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-389582 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 cache add registry.k8s.io/pause:3.1: (1.549247114s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 cache add registry.k8s.io/pause:3.3: (1.406036176s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 cache add registry.k8s.io/pause:latest: (1.210092752s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.17s)

TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-389582 /tmp/TestFunctionalserialCacheCmdcacheadd_local3197700433/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cache add minikube-local-cache-test:functional-389582
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cache delete minikube-local-cache-test:functional-389582
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-389582
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389582 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (311.597136ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 cache reload: (1.074391074s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.03s)
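
The cache_reload sequence is the whole assertion in miniature: remove the image from the node, confirm crictl no longer finds it, run `cache reload`, and confirm the image is back. A minimal sketch of that flow (the commands mirror the log verbatim; error handling is deliberately simplified and this is not the harness code):

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its combined output, and returns its error.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Printf("$ %v\n%s", args, out)
	return err
}

func main() {
	mk, p := "out/minikube-linux-arm64", "functional-389582"
	img := "registry.k8s.io/pause:latest"
	_ = run(mk, "-p", p, "ssh", "sudo crictl rmi "+img)
	if run(mk, "-p", p, "ssh", "sudo crictl inspecti "+img) == nil {
		fmt.Println("expected inspecti to fail after rmi")
	}
	_ = run(mk, "-p", p, "cache", "reload")
	if run(mk, "-p", p, "ssh", "sudo crictl inspecti "+img) != nil {
		fmt.Println("image still missing after cache reload")
	}
}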

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 kubectl -- --context functional-389582 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-389582 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.19s)

TestFunctional/serial/ExtraConfig (45.43s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389582 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-389582 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.426068045s)
functional_test.go:761: restart took 45.426162444s for "functional-389582" cluster.
I1007 13:08:38.583698  580163 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (45.43s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-389582 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.85s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 logs: (1.853561235s)
--- PASS: TestFunctional/serial/LogsCmd (1.85s)

TestFunctional/serial/LogsFileCmd (1.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 logs --file /tmp/TestFunctionalserialLogsFileCmd263412451/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 logs --file /tmp/TestFunctionalserialLogsFileCmd263412451/001/logs.txt: (1.909026476s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.91s)

TestFunctional/serial/InvalidService (5.57s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-389582 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-389582
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-389582: exit status 115 (407.931018ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31331 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-389582 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-389582 delete -f testdata/invalidsvc.yaml: (1.91472958s)
--- PASS: TestFunctional/serial/InvalidService (5.57s)

TestFunctional/parallel/ConfigCmd (0.52s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389582 config get cpus: exit status 14 (98.531375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389582 config get cpus: exit status 14 (96.562998ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (10.9s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389582 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-389582 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 617196: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.90s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389582 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389582 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (196.187689ms)

-- stdout --
	* [functional-389582] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1007 13:09:20.754399  616891 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:09:20.754626  616891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:09:20.754658  616891 out.go:358] Setting ErrFile to fd 2...
	I1007 13:09:20.754682  616891 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:09:20.755039  616891 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:09:20.755529  616891 out.go:352] Setting JSON to false
	I1007 13:09:20.756644  616891 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10310,"bootTime":1728296251,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1007 13:09:20.756752  616891 start.go:139] virtualization:  
	I1007 13:09:20.760114  616891 out.go:177] * [functional-389582] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:09:20.762006  616891 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:09:20.762093  616891 notify.go:220] Checking for updates...
	I1007 13:09:20.765802  616891 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:09:20.767502  616891 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:09:20.769338  616891 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	I1007 13:09:20.771070  616891 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:09:20.772882  616891 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:09:20.775134  616891 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:09:20.775693  616891 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:09:20.807301  616891 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:09:20.807441  616891 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:09:20.881809  616891 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:09:20.872382648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:09:20.881928  616891 docker.go:318] overlay module found
	I1007 13:09:20.884079  616891 out.go:177] * Using the docker driver based on existing profile
	I1007 13:09:20.885767  616891 start.go:297] selected driver: docker
	I1007 13:09:20.885790  616891 start.go:901] validating driver "docker" against &{Name:functional-389582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-389582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:09:20.885891  616891 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:09:20.888383  616891 out.go:201] 
	W1007 13:09:20.890108  616891 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1007 13:09:20.891944  616891 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389582 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)
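
The dry-run failure above is minikube's up-front resource validation: the requested 250MB is rejected against the 1800MB usable minimum and the run exits with RSRC_INSUFFICIENT_REQ_MEMORY before the driver is touched. A minimal sketch of such a check (the 1800MB floor is read off the error message; the parsing assumes a bare "<n>MB" value, which is far simpler than minikube's real flag handling):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

const minUsableMB = 1800 // floor quoted in the error message above

// validateMemory rejects a --memory request below the usable minimum.
func validateMemory(flag string) error {
	n, err := strconv.Atoi(strings.TrimSuffix(flag, "MB"))
	if err != nil {
		return fmt.Errorf("cannot parse %q: %v", flag, err)
	}
	if n < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMB is less than the usable minimum of %dMB", n, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory("250MB")) // mirrors the --memory 250MB dry run above
}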

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-389582 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-389582 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (199.348277ms)

-- stdout --
	* [functional-389582] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1007 13:09:20.566482  616845 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:09:20.566629  616845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:09:20.566640  616845 out.go:358] Setting ErrFile to fd 2...
	I1007 13:09:20.566647  616845 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:09:20.567030  616845 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:09:20.567424  616845 out.go:352] Setting JSON to false
	I1007 13:09:20.568485  616845 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10309,"bootTime":1728296251,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1007 13:09:20.568562  616845 start.go:139] virtualization:  
	I1007 13:09:20.573411  616845 out.go:177] * [functional-389582] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1007 13:09:20.575718  616845 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:09:20.575866  616845 notify.go:220] Checking for updates...
	I1007 13:09:20.582106  616845 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:09:20.584817  616845 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:09:20.586870  616845 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	I1007 13:09:20.588892  616845 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:09:20.597947  616845 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:09:20.600285  616845 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:09:20.600914  616845 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:09:20.625256  616845 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:09:20.625380  616845 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:09:20.686172  616845 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:09:20.6763858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:09:20.686286  616845 docker.go:318] overlay module found
	I1007 13:09:20.688262  616845 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1007 13:09:20.690014  616845 start.go:297] selected driver: docker
	I1007 13:09:20.690035  616845 start.go:901] validating driver "docker" against &{Name:functional-389582 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-389582 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1007 13:09:20.690155  616845 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:09:20.692526  616845 out.go:201] 
	W1007 13:09:20.694149  616845 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1007 13:09:20.695741  616845 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
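Note: this test passes because minikube emitted its error in French; the message above translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation 250 MiB is below the usable minimum of 1800 MB". A minimal reproduction sketch, assuming the profile above; --dry-run is an assumption (the captured log does not show the exact command line), used here so validation fails before any cluster state is touched:

# LC_ALL selects the locale for minikube's client messages;
# 250MB is deliberately below minikube's 1800MB memory floor.
LC_ALL=fr out/minikube-linux-arm64 start -p functional-389582 --dry-run --memory=250MB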

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (11.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-389582 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-389582 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-nwnjq" [464857d2-4688-4408-88ac-0b3459fc1d6b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-nwnjq" [464857d2-4688-4408-88ac-0b3459fc1d6b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004882637s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30556
functional_test.go:1675: http://192.168.49.2:30556: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-nwnjq

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30556
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.70s)
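The echoserver reply above is the round-trip proof: the NodePort URL printed by `minikube service --url` serves back the pod's hostname and request metadata. A minimal sketch of the same check, assuming the profile and image used in this run:

kubectl --context functional-389582 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-389582 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-389582 wait --for=condition=ready pod -l app=hello-node-connect --timeout=10m
# Resolve the NodePort URL, then fetch it; the body should echo the request back.
URL=$(out/minikube-linux-arm64 -p functional-389582 service hello-node-connect --url)
curl -s "$URL"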

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3417378e-5dd8-43cb-b9fd-005315d7f688] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004799351s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-389582 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-389582 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-389582 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-389582 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [721d9a6e-b190-4f04-a317-687c6e730ad2] Pending
helpers_test.go:344: "sp-pod" [721d9a6e-b190-4f04-a317-687c6e730ad2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [721d9a6e-b190-4f04-a317-687c6e730ad2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003181193s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-389582 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-389582 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-389582 delete -f testdata/storage-provisioner/pod.yaml: (1.11891681s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-389582 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [512d9703-06b9-460f-bda3-49c2624f8489] Pending
helpers_test.go:344: "sp-pod" [512d9703-06b9-460f-bda3-49c2624f8489] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003726304s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-389582 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.15s)
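The sequence above is a persistence check: a file written into the PVC-backed mount must survive the pod being deleted and recreated. A condensed sketch, assuming the testdata manifests referenced in the log:

kubectl --context functional-389582 apply -f testdata/storage-provisioner/pvc.yaml
kubectl --context functional-389582 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-389582 wait --for=condition=ready pod sp-pod --timeout=3m
kubectl --context functional-389582 exec sp-pod -- touch /tmp/mount/foo
# Recreate the pod; the claim (and the file) should be reattached.
kubectl --context functional-389582 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-389582 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-389582 wait --for=condition=ready pod sp-pod --timeout=3m
kubectl --context functional-389582 exec sp-pod -- ls /tmp/mount   # 'foo' must still be listed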

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh -n functional-389582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cp functional-389582:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1437545341/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh -n functional-389582 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh -n functional-389582 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.29s)
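CpCmd exercises three directions: host-to-guest, guest-to-host, and host-to-a-guest-path-that-does-not-yet-exist. A sketch of the host-to-guest round trip, assuming the same profile (/tmp/cp-test-roundtrip.txt is a placeholder destination):

out/minikube-linux-arm64 -p functional-389582 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-arm64 -p functional-389582 cp functional-389582:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt && echo "round trip OK"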

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/580163/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /etc/test/nested/copy/580163/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/580163.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /etc/ssl/certs/580163.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/580163.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /usr/share/ca-certificates/580163.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/5801632.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /etc/ssl/certs/5801632.pem"
E1007 13:09:33.470239  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:09:33.551511  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:09:33.713223  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/5801632.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /usr/share/ca-certificates/5801632.pem"
E1007 13:09:34.035464  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)
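CertSync verifies that certificates from the host's .minikube certs directory are synced into the guest both under their literal names and under hashed names (51391683.0, 3ec20f2e.0 above). A spot check, assuming the PID-derived filenames from this run:

# The same certificate should be readable under both forms inside the guest.
out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /etc/ssl/certs/580163.pem"
out/minikube-linux-arm64 -p functional-389582 ssh "sudo cat /etc/ssl/certs/51391683.0"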

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-389582 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389582 ssh "sudo systemctl is-active docker": exit status 1 (299.817298ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389582 ssh "sudo systemctl is-active crio": exit status 1 (277.871491ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
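The non-zero exits above are expected: `systemctl is-active` prints the unit state and exits 0 only for an active unit, so "inactive" plus exit status 3 is the passing result for docker and crio on this containerd cluster. The same probe by hand, with the active runtime for contrast:

out/minikube-linux-arm64 -p functional-389582 ssh "sudo systemctl is-active docker"      # expect 'inactive', exit 3
out/minikube-linux-arm64 -p functional-389582 ssh "sudo systemctl is-active containerd"  # expect 'active', exit 0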

                                                
                                    
x
+
TestFunctional/parallel/License (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389582 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389582 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-389582 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-389582 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 614342: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-389582 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-389582 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [255e38fa-2fe1-4061-9c3e-42a1632c0101] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [255e38fa-2fe1-4061-9c3e-42a1632c0101] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004590749s
I1007 13:08:58.245234  580163 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.44s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-389582 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.92.50 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-389582 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
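The tunnel serial group above follows one workflow: start `minikube tunnel`, deploy a LoadBalancer service, wait for an ingress IP, hit it directly, then tear the tunnel down. A condensed sketch, assuming testdata/testsvc.yaml from the log:

out/minikube-linux-arm64 -p functional-389582 tunnel --alsologtostderr &
TUNNEL_PID=$!
kubectl --context functional-389582 apply -f testdata/testsvc.yaml
kubectl --context functional-389582 wait --for=condition=ready pod -l run=nginx-svc --timeout=4m
# The tunnel assigns the service an externally reachable ingress IP.
IP=$(kubectl --context functional-389582 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -s "http://$IP" >/dev/null && echo "tunnel at http://$IP is working"
kill $TUNNEL_PID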

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-389582 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-389582 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-zzx7h" [2b079da1-1428-4200-a1d4-91c4cc899b05] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-zzx7h" [2b079da1-1428-4200-a1d4-91c4cc899b05] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004812739s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "456.751253ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "75.687846ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 service list -o json
functional_test.go:1494: Took "597.419256ms" to run "out/minikube-linux-arm64 -p functional-389582 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "460.968543ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "95.298558ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.56s)
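The timings above show why the light variants exist: a plain `profile list` probes each cluster's status (~457 ms here), while `-l`/`--light` skips the status checks (~76-95 ms). The four variants exercised in this group:

out/minikube-linux-arm64 profile list                  # full table, probes cluster status
out/minikube-linux-arm64 profile list -l               # light: skips status probes
out/minikube-linux-arm64 profile list -o json          # machine-readable, full
out/minikube-linux-arm64 profile list -o json --light  # machine-readable, light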

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31245
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (6.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdany-port135372761/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728306558096469516" to /tmp/TestFunctionalparallelMountCmdany-port135372761/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728306558096469516" to /tmp/TestFunctionalparallelMountCmdany-port135372761/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728306558096469516" to /tmp/TestFunctionalparallelMountCmdany-port135372761/001/test-1728306558096469516
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  7 13:09 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  7 13:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  7 13:09 test-1728306558096469516
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh cat /mount-9p/test-1728306558096469516
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-389582 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [e1c9311a-25f7-4e02-9482-950ab844b0ea] Pending
helpers_test.go:344: "busybox-mount" [e1c9311a-25f7-4e02-9482-950ab844b0ea] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [e1c9311a-25f7-4e02-9482-950ab844b0ea] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [e1c9311a-25f7-4e02-9482-950ab844b0ea] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.00352466s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-389582 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdany-port135372761/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.67s)
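The any-port flow: run `minikube mount` in the background, confirm a 9p filesystem appears via findmnt, read host-written files from inside the guest, then unmount. A minimal sketch, with /tmp/hostdir as a placeholder host directory:

mkdir -p /tmp/hostdir && echo hello > /tmp/hostdir/created-by-test
out/minikube-linux-arm64 mount -p functional-389582 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
MOUNT_PID=$!
out/minikube-linux-arm64 -p functional-389582 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-389582 ssh "cat /mount-9p/created-by-test"
kill $MOUNT_PID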

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31245
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdspecific-port913940795/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389582 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (539.264555ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1007 13:09:25.308340  580163 retry.go:31] will retry after 428.099758ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdspecific-port913940795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389582 ssh "sudo umount -f /mount-9p": exit status 1 (319.76631ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-389582 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdspecific-port913940795/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780291721/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780291721/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780291721/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-389582 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780291721/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780291721/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-389582 /tmp/TestFunctionalparallelMountCmdVerifyCleanup780291721/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.86s)
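VerifyCleanup relies on the kill switch seen at functional_test_mount_test.go:370: one `--kill=true` call tears down every mount daemon for the profile, which is why the subsequent per-process stops report "unable to find parent, assuming dead". The cleanup call from the log:

out/minikube-linux-arm64 mount -p functional-389582 --kill=true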

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 version -o=json --components
E1007 13:09:34.677100  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 version -o=json --components: (1.423604667s)
--- PASS: TestFunctional/parallel/Version/components (1.42s)
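Two version forms are checked: the terse client version and a JSON dump of the component versions inside the guest. Both invocations as recorded above:

out/minikube-linux-arm64 -p functional-389582 version --short
out/minikube-linux-arm64 -p functional-389582 version -o=json --components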

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389582 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-389582
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-389582
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389582 image ls --format short --alsologtostderr:
I1007 13:09:37.213762  619717 out.go:345] Setting OutFile to fd 1 ...
I1007 13:09:37.213931  619717 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:37.213940  619717 out.go:358] Setting ErrFile to fd 2...
I1007 13:09:37.213946  619717 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:37.214218  619717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
I1007 13:09:37.215002  619717 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:37.215198  619717 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:37.215804  619717 cli_runner.go:164] Run: docker container inspect functional-389582 --format={{.State.Status}}
I1007 13:09:37.246245  619717 ssh_runner.go:195] Run: systemctl --version
I1007 13:09:37.246297  619717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389582
I1007 13:09:37.267422  619717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/functional-389582/id_rsa Username:docker}
I1007 13:09:37.374910  619717 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
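All four `image ls` formats in this group are views over the same data; the stderr traces show each invocation ultimately runs `sudo crictl images --output json` inside the guest over SSH. The variants exercised:

out/minikube-linux-arm64 -p functional-389582 image ls --format short
out/minikube-linux-arm64 -p functional-389582 image ls --format table
out/minikube-linux-arm64 -p functional-389582 image ls --format json
out/minikube-linux-arm64 -p functional-389582 image ls --format yaml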

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389582 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kicbase/echo-server               | functional-389582  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-389582  | sha256:6b7205 | 992B   |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | latest             | sha256:048e09 | 69.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389582 image ls --format table --alsologtostderr:
I1007 13:09:37.554728  619790 out.go:345] Setting OutFile to fd 1 ...
I1007 13:09:37.554949  619790 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:37.554976  619790 out.go:358] Setting ErrFile to fd 2...
I1007 13:09:37.554997  619790 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:37.555315  619790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
I1007 13:09:37.556522  619790 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:37.557959  619790 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:37.558581  619790 cli_runner.go:164] Run: docker container inspect functional-389582 --format={{.State.Status}}
I1007 13:09:37.588745  619790 ssh_runner.go:195] Run: systemctl --version
I1007 13:09:37.588808  619790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389582
I1007 13:09:37.610028  619790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/functional-389582/id_rsa Username:docker}
I1007 13:09:37.707297  619790 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389582 image ls --format json --alsologtostderr:
[{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-389582"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:6b7205e4e22e5125aacf95e5f69ed85c768044da53b2d5c8e4c2da0be3177b4c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-389582"],"size":"992"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"],"size":"21533923"},{"id":"sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600401"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389582 image ls --format json --alsologtostderr:
I1007 13:09:37.510819  619785 out.go:345] Setting OutFile to fd 1 ...
I1007 13:09:37.511004  619785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:37.511016  619785 out.go:358] Setting ErrFile to fd 2...
I1007 13:09:37.511028  619785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:37.511350  619785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
I1007 13:09:37.512335  619785 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:37.512471  619785 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:37.513012  619785 cli_runner.go:164] Run: docker container inspect functional-389582 --format={{.State.Status}}
I1007 13:09:37.534314  619785 ssh_runner.go:195] Run: systemctl --version
I1007 13:09:37.534382  619785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389582
I1007 13:09:37.566505  619785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/functional-389582/id_rsa Username:docker}
I1007 13:09:37.663088  619785 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-389582 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-389582
size: "2173567"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:6b7205e4e22e5125aacf95e5f69ed85c768044da53b2d5c8e4c2da0be3177b4c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-389582
size: "992"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "69600401"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389582 image ls --format yaml --alsologtostderr:
I1007 13:09:37.237302  619718 out.go:345] Setting OutFile to fd 1 ...
I1007 13:09:37.239103  619718 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:37.239162  619718 out.go:358] Setting ErrFile to fd 2...
I1007 13:09:37.239184  619718 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:37.239465  619718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
I1007 13:09:37.240822  619718 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:37.241004  619718 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:37.241506  619718 cli_runner.go:164] Run: docker container inspect functional-389582 --format={{.State.Status}}
I1007 13:09:37.265870  619718 ssh_runner.go:195] Run: systemctl --version
I1007 13:09:37.265932  619718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389582
I1007 13:09:37.302533  619718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/functional-389582/id_rsa Username:docker}
I1007 13:09:37.410394  619718 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)
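The stderr above also shows how minikube locates the SSH endpoint before running crictl: a docker container inspect with a Go template that indexes .NetworkSettings.Ports at "22/tcp". A hedged sketch of the same lookup via os/exec (container name is the one from this run; assumes a local docker CLI is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort runs the same inspect template the log uses to find which
// host port is mapped to the container's 22/tcp.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("functional-389582")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port) // this run shows port 33519
}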

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-389582 ssh pgrep buildkitd: exit status 1 (303.012802ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image build -t localhost/my-image:functional-389582 testdata/build --alsologtostderr
E1007 13:09:38.520660  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 image build -t localhost/my-image:functional-389582 testdata/build --alsologtostderr: (3.168439856s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-389582 image build -t localhost/my-image:functional-389582 testdata/build --alsologtostderr:
I1007 13:09:38.076790  619906 out.go:345] Setting OutFile to fd 1 ...
I1007 13:09:38.077457  619906 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:38.077488  619906 out.go:358] Setting ErrFile to fd 2...
I1007 13:09:38.077496  619906 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1007 13:09:38.078401  619906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
I1007 13:09:38.079210  619906 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:38.081011  619906 config.go:182] Loaded profile config "functional-389582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1007 13:09:38.081643  619906 cli_runner.go:164] Run: docker container inspect functional-389582 --format={{.State.Status}}
I1007 13:09:38.100197  619906 ssh_runner.go:195] Run: systemctl --version
I1007 13:09:38.100267  619906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-389582
I1007 13:09:38.118494  619906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33519 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/functional-389582/id_rsa Username:docker}
I1007 13:09:38.210406  619906 build_images.go:161] Building image from path: /tmp/build.2457190865.tar
I1007 13:09:38.210480  619906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1007 13:09:38.219853  619906 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2457190865.tar
I1007 13:09:38.223625  619906 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2457190865.tar: stat -c "%s %y" /var/lib/minikube/build/build.2457190865.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2457190865.tar': No such file or directory
I1007 13:09:38.223663  619906 ssh_runner.go:362] scp /tmp/build.2457190865.tar --> /var/lib/minikube/build/build.2457190865.tar (3072 bytes)
I1007 13:09:38.251787  619906 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2457190865
I1007 13:09:38.261344  619906 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2457190865 -xf /var/lib/minikube/build/build.2457190865.tar
I1007 13:09:38.272012  619906 containerd.go:394] Building image: /var/lib/minikube/build/build.2457190865
I1007 13:09:38.272144  619906 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2457190865 --local dockerfile=/var/lib/minikube/build/build.2457190865 --output type=image,name=localhost/my-image:functional-389582
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.3s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1a467a701704979c198d1204a78aedb2f9cd5387c027c3630e611370e4207aad
#8 exporting manifest sha256:1a467a701704979c198d1204a78aedb2f9cd5387c027c3630e611370e4207aad 0.0s done
#8 exporting config sha256:57549a0f7f57a950be3e7cb3757e6c2f79b1e8db52d45a32604136773930bcbe 0.0s done
#8 naming to localhost/my-image:functional-389582 done
#8 DONE 0.1s
I1007 13:09:41.160254  619906 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.2457190865 --local dockerfile=/var/lib/minikube/build/build.2457190865 --output type=image,name=localhost/my-image:functional-389582: (2.888047398s)
I1007 13:09:41.160333  619906 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2457190865
I1007 13:09:41.171104  619906 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2457190865.tar
I1007 13:09:41.183046  619906 build_images.go:217] Built localhost/my-image:functional-389582 from /tmp/build.2457190865.tar
I1007 13:09:41.183079  619906 build_images.go:133] succeeded building to: functional-389582
I1007 13:09:41.183086  619906 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.71s)
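The build_images.go steps above follow a fixed staging pattern: pack the local build context into a tar, scp it to /var/lib/minikube/build/, unpack it there, then run buildctl against the unpacked directory. A minimal sketch of just the packing step, assuming archive/tar semantics comparable to the /tmp/build.*.tar seen in the log (error handling abbreviated; not minikube's actual implementation):

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// packContext writes every regular file under dir into a tar at out,
// using paths relative to dir, mirroring the /tmp/build.*.tar staging
// step in the log above.
func packContext(dir, out string) error {
	f, err := os.Create(out)
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	defer tw.Close()
	return filepath.Walk(dir, func(path string, info os.FileInfo, walkErr error) error {
		if walkErr != nil || info.IsDir() {
			return walkErr
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		if hdr.Name, err = filepath.Rel(dir, path); err != nil {
			return err
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		src, err := os.Open(path)
		if err != nil {
			return err
		}
		defer src.Close()
		_, err = io.Copy(tw, src)
		return err
	})
}

func main() {
	_ = packContext("testdata/build", "/tmp/build.tar")
}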

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-389582
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image load --daemon kicbase/echo-server:functional-389582 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.20s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image load --daemon kicbase/echo-server:functional-389582 --alsologtostderr
2024/10/07 13:09:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 image load --daemon kicbase/echo-server:functional-389582 --alsologtostderr: (1.041506279s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-389582
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image load --daemon kicbase/echo-server:functional-389582 --alsologtostderr
E1007 13:09:33.382723  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:09:33.389049  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:09:33.400671  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:09:33.423102  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-389582 image load --daemon kicbase/echo-server:functional-389582 --alsologtostderr: (1.091140499s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.66s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image save kicbase/echo-server:functional-389582 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image rm kicbase/echo-server:functional-389582 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
E1007 13:09:35.958417  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-389582
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-389582 image save --daemon kicbase/echo-server:functional-389582 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-389582
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
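Taken together, the passing tests above exercise a full image round-trip: save a tagged image to a tar on the host, load it back into the cluster runtime, and save it back into the host docker daemon. A sketch of the same sequence via os/exec (binary path, tag, and subcommands are the ones shown in this run; the /tmp tarball path is an example):

package main

import (
	"fmt"
	"os/exec"
)

// minikube shells out to the test binary the same way the log lines do.
func minikube(args ...string) error {
	out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	tag := "kicbase/echo-server:functional-389582"
	tarball := "/tmp/echo-server-save.tar"
	steps := [][]string{
		{"-p", "functional-389582", "image", "save", tag, tarball},    // save to file
		{"-p", "functional-389582", "image", "load", tarball},         // load from file
		{"-p", "functional-389582", "image", "save", "--daemon", tag}, // save to host docker
	}
	for _, s := range steps {
		if err := minikube(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
	fmt.Println("round-trip complete")
}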

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-389582
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-389582
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-389582
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (127.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-701715 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1007 13:09:53.883477  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:10:14.365071  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:10:55.327462  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-701715 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m6.803325569s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (127.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (32.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- rollout status deployment/busybox
E1007 13:12:17.249632  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-701715 -- rollout status deployment/busybox: (29.392587064s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-gqnjd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-j2q2r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-r6t6s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-gqnjd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-j2q2r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-r6t6s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-gqnjd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-j2q2r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-r6t6s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.51s)
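After the rollout check, the test fans DNS queries across every replica: each busybox pod must resolve kubernetes.io, kubernetes.default, and the fully qualified service name. A compact sketch of that loop using kubectl directly (the test itself shells through out/minikube-linux-arm64 kubectl; pod names are from this run, and the --context flag here is an assumption):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	pods := []string{"busybox-7dff88458-gqnjd", "busybox-7dff88458-j2q2r", "busybox-7dff88458-r6t6s"}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			// Resolve each name from inside each pod, as the test does.
			out, err := exec.Command("kubectl", "--context", "ha-701715",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				fmt.Printf("%s -> %s failed: %v\n%s\n", pod, name, err, out)
			}
		}
	}
}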

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-gqnjd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-gqnjd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-j2q2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-j2q2r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-r6t6s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-701715 -- exec busybox-7dff88458-r6t6s -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.68s)
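The shell pipeline in these steps extracts the resolved host IP: busybox nslookup prints the answer on its fifth output line, awk 'NR==5' keeps that line, and cut -d' ' -f3 takes its third space-separated field, which is then pinged. An approximate Go equivalent (the sample output below is illustrative, not captured from this run, and strings.Fields collapses repeated spaces, so this is a close but not exact stand-in for cut):

package main

import (
	"fmt"
	"strings"
)

// thirdFieldOfLine5 mimics `awk 'NR==5' | cut -d' ' -f3` on nslookup
// output: take line 5, return its third space-separated field.
func thirdFieldOfLine5(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Fields(lines[4]) // NR==5 is the fifth line
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox-style nslookup output (assumed shape).
	sample := "Server: 10.96.0.10\nAddress 1: 10.96.0.10\n\nName: host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(thirdFieldOfLine5(sample)) // 192.168.49.1, the address pinged above
}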

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (20.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-701715 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-701715 -v=7 --alsologtostderr: (19.694762777s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr: (1.026924639s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-701715 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.020046079s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (19.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-701715 status --output json -v=7 --alsologtostderr: (1.061185193s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp testdata/cp-test.txt ha-701715:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3910796946/001/cp-test_ha-701715.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715:/home/docker/cp-test.txt ha-701715-m02:/home/docker/cp-test_ha-701715_ha-701715-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m02 "sudo cat /home/docker/cp-test_ha-701715_ha-701715-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715:/home/docker/cp-test.txt ha-701715-m03:/home/docker/cp-test_ha-701715_ha-701715-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m03 "sudo cat /home/docker/cp-test_ha-701715_ha-701715-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715:/home/docker/cp-test.txt ha-701715-m04:/home/docker/cp-test_ha-701715_ha-701715-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m04 "sudo cat /home/docker/cp-test_ha-701715_ha-701715-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp testdata/cp-test.txt ha-701715-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3910796946/001/cp-test_ha-701715-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m02:/home/docker/cp-test.txt ha-701715:/home/docker/cp-test_ha-701715-m02_ha-701715.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715 "sudo cat /home/docker/cp-test_ha-701715-m02_ha-701715.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m02:/home/docker/cp-test.txt ha-701715-m03:/home/docker/cp-test_ha-701715-m02_ha-701715-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m03 "sudo cat /home/docker/cp-test_ha-701715-m02_ha-701715-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m02:/home/docker/cp-test.txt ha-701715-m04:/home/docker/cp-test_ha-701715-m02_ha-701715-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m04 "sudo cat /home/docker/cp-test_ha-701715-m02_ha-701715-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp testdata/cp-test.txt ha-701715-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3910796946/001/cp-test_ha-701715-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m03:/home/docker/cp-test.txt ha-701715:/home/docker/cp-test_ha-701715-m03_ha-701715.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715 "sudo cat /home/docker/cp-test_ha-701715-m03_ha-701715.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m03:/home/docker/cp-test.txt ha-701715-m02:/home/docker/cp-test_ha-701715-m03_ha-701715-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m02 "sudo cat /home/docker/cp-test_ha-701715-m03_ha-701715-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m03:/home/docker/cp-test.txt ha-701715-m04:/home/docker/cp-test_ha-701715-m03_ha-701715-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m04 "sudo cat /home/docker/cp-test_ha-701715-m03_ha-701715-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp testdata/cp-test.txt ha-701715-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3910796946/001/cp-test_ha-701715-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m04:/home/docker/cp-test.txt ha-701715:/home/docker/cp-test_ha-701715-m04_ha-701715.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715 "sudo cat /home/docker/cp-test_ha-701715-m04_ha-701715.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m04:/home/docker/cp-test.txt ha-701715-m02:/home/docker/cp-test_ha-701715-m04_ha-701715-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m02 "sudo cat /home/docker/cp-test_ha-701715-m04_ha-701715-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 cp ha-701715-m04:/home/docker/cp-test.txt ha-701715-m03:/home/docker/cp-test_ha-701715-m04_ha-701715-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 ssh -n ha-701715-m03 "sudo cat /home/docker/cp-test_ha-701715-m04_ha-701715-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.77s)
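The long sequence above is an all-pairs matrix: testdata/cp-test.txt lands on each node, then each node's copy is pushed to every other node under a cp-test_SRC_DST.txt name and read back over ssh. A condensed sketch of the push half (profile, node names, and the cp path syntax are from this run; the verification ssh/cat half is omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "ha-701715"
	nodes := []string{"ha-701715", "ha-701715-m02", "ha-701715-m03", "ha-701715-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			// Destination name encodes source and target, as in the log.
			target := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			cmd := exec.Command("out/minikube-linux-arm64", "-p", profile,
				"cp", src+":/home/docker/cp-test.txt", target)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("copy %s -> %s failed: %v\n%s\n", src, dst, err, out)
			}
		}
	}
}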

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-701715 node stop m02 -v=7 --alsologtostderr: (12.13528206s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr: exit status 7 (821.31886ms)

                                                
                                                
-- stdout --
	ha-701715
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-701715-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-701715-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-701715-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:13:19.909146  636109 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:13:19.909332  636109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:13:19.909363  636109 out.go:358] Setting ErrFile to fd 2...
	I1007 13:13:19.909384  636109 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:13:19.909748  636109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:13:19.909978  636109 out.go:352] Setting JSON to false
	I1007 13:13:19.910035  636109 mustload.go:65] Loading cluster: ha-701715
	I1007 13:13:19.910074  636109 notify.go:220] Checking for updates...
	I1007 13:13:19.910511  636109 config.go:182] Loaded profile config "ha-701715": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:13:19.910558  636109 status.go:174] checking status of ha-701715 ...
	I1007 13:13:19.911182  636109 cli_runner.go:164] Run: docker container inspect ha-701715 --format={{.State.Status}}
	I1007 13:13:19.936908  636109 status.go:371] ha-701715 host status = "Running" (err=<nil>)
	I1007 13:13:19.936934  636109 host.go:66] Checking if "ha-701715" exists ...
	I1007 13:13:19.937241  636109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-701715
	I1007 13:13:19.971676  636109 host.go:66] Checking if "ha-701715" exists ...
	I1007 13:13:19.972000  636109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:13:19.972043  636109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-701715
	I1007 13:13:19.994940  636109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33524 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/ha-701715/id_rsa Username:docker}
	I1007 13:13:20.116220  636109 ssh_runner.go:195] Run: systemctl --version
	I1007 13:13:20.121502  636109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:13:20.135461  636109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:13:20.206327  636109 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-07 13:13:20.194642478 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:13:20.206913  636109 kubeconfig.go:125] found "ha-701715" server: "https://192.168.49.254:8443"
	I1007 13:13:20.206955  636109 api_server.go:166] Checking apiserver status ...
	I1007 13:13:20.207007  636109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:13:20.219591  636109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1481/cgroup
	I1007 13:13:20.228990  636109 api_server.go:182] apiserver freezer: "9:freezer:/docker/c0d91a63d2ed224c9a3b74fa60a6e582d2d75dc0e2294fbf2594a765ceedc4c3/kubepods/burstable/pod83782268b487cff429f31521281d3ec6/e93b64bc30a5439dc7464be148dac62b9e11a6c0eca7273701378ee94d965f72"
	I1007 13:13:20.229065  636109 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c0d91a63d2ed224c9a3b74fa60a6e582d2d75dc0e2294fbf2594a765ceedc4c3/kubepods/burstable/pod83782268b487cff429f31521281d3ec6/e93b64bc30a5439dc7464be148dac62b9e11a6c0eca7273701378ee94d965f72/freezer.state
	I1007 13:13:20.238362  636109 api_server.go:204] freezer state: "THAWED"
	I1007 13:13:20.238405  636109 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1007 13:13:20.247631  636109 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1007 13:13:20.247663  636109 status.go:463] ha-701715 apiserver status = Running (err=<nil>)
	I1007 13:13:20.247676  636109 status.go:176] ha-701715 status: &{Name:ha-701715 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:13:20.247737  636109 status.go:174] checking status of ha-701715-m02 ...
	I1007 13:13:20.248073  636109 cli_runner.go:164] Run: docker container inspect ha-701715-m02 --format={{.State.Status}}
	I1007 13:13:20.264602  636109 status.go:371] ha-701715-m02 host status = "Stopped" (err=<nil>)
	I1007 13:13:20.264624  636109 status.go:384] host is not running, skipping remaining checks
	I1007 13:13:20.264633  636109 status.go:176] ha-701715-m02 status: &{Name:ha-701715-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:13:20.264654  636109 status.go:174] checking status of ha-701715-m03 ...
	I1007 13:13:20.264970  636109 cli_runner.go:164] Run: docker container inspect ha-701715-m03 --format={{.State.Status}}
	I1007 13:13:20.282547  636109 status.go:371] ha-701715-m03 host status = "Running" (err=<nil>)
	I1007 13:13:20.282575  636109 host.go:66] Checking if "ha-701715-m03" exists ...
	I1007 13:13:20.282888  636109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-701715-m03
	I1007 13:13:20.299691  636109 host.go:66] Checking if "ha-701715-m03" exists ...
	I1007 13:13:20.300001  636109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:13:20.300051  636109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-701715-m03
	I1007 13:13:20.318172  636109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33534 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/ha-701715-m03/id_rsa Username:docker}
	I1007 13:13:20.414809  636109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:13:20.429804  636109 kubeconfig.go:125] found "ha-701715" server: "https://192.168.49.254:8443"
	I1007 13:13:20.429835  636109 api_server.go:166] Checking apiserver status ...
	I1007 13:13:20.429903  636109 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:13:20.450646  636109 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1299/cgroup
	I1007 13:13:20.466865  636109 api_server.go:182] apiserver freezer: "9:freezer:/docker/fa8168e109aa452f629fbb2f9fb0a7866fc1b5f04377e0e0c9fb2ab522341d21/kubepods/burstable/podf3978ead4d1c3c7fc9057d290444e144/a329936b810db25687b1bed6860acdb209534258308398171cd674d7ca6a1fc3"
	I1007 13:13:20.466991  636109 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fa8168e109aa452f629fbb2f9fb0a7866fc1b5f04377e0e0c9fb2ab522341d21/kubepods/burstable/podf3978ead4d1c3c7fc9057d290444e144/a329936b810db25687b1bed6860acdb209534258308398171cd674d7ca6a1fc3/freezer.state
	I1007 13:13:20.477100  636109 api_server.go:204] freezer state: "THAWED"
	I1007 13:13:20.477139  636109 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1007 13:13:20.485203  636109 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1007 13:13:20.485235  636109 status.go:463] ha-701715-m03 apiserver status = Running (err=<nil>)
	I1007 13:13:20.485246  636109 status.go:176] ha-701715-m03 status: &{Name:ha-701715-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:13:20.485297  636109 status.go:174] checking status of ha-701715-m04 ...
	I1007 13:13:20.485621  636109 cli_runner.go:164] Run: docker container inspect ha-701715-m04 --format={{.State.Status}}
	I1007 13:13:20.503681  636109 status.go:371] ha-701715-m04 host status = "Running" (err=<nil>)
	I1007 13:13:20.503717  636109 host.go:66] Checking if "ha-701715-m04" exists ...
	I1007 13:13:20.504018  636109 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-701715-m04
	I1007 13:13:20.522834  636109 host.go:66] Checking if "ha-701715-m04" exists ...
	I1007 13:13:20.523365  636109 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:13:20.523522  636109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-701715-m04
	I1007 13:13:20.544353  636109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33539 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/ha-701715-m04/id_rsa Username:docker}
	I1007 13:13:20.644018  636109 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:13:20.658323  636109 status.go:176] ha-701715-m04 status: &{Name:ha-701715-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.96s)
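In the stderr above, the status command decides apiserver health in three steps: find the apiserver PID with pgrep, confirm its freezer cgroup is THAWED, then probe /healthz on the HA virtual IP. A minimal sketch of just the probe (endpoint taken from this run; certificate verification is skipped here only because this sketch does not load the cluster CA the real check trusts):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: the real check verifies against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body)) // the log shows 200 with body "ok"
}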

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (19.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-701715 node start m02 -v=7 --alsologtostderr: (18.237429548s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr: (1.110713892s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (19.46s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.024158039s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (152.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-701715 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-701715 -v=7 --alsologtostderr
E1007 13:13:49.807844  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:49.814541  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:49.825903  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:49.847392  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:49.888781  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:49.970217  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:50.131817  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:50.453574  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:51.095608  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:52.377309  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:13:54.939324  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:14:00.062162  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:14:10.312504  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-701715 -v=7 --alsologtostderr: (37.730411268s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-701715 --wait=true -v=7 --alsologtostderr
E1007 13:14:30.794845  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:14:33.380299  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:15:01.092007  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:15:11.756823  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-701715 --wait=true -v=7 --alsologtostderr: (1m54.672105743s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-701715
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (152.59s)
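
Note: the stop/start cycle this test exercises can be reproduced by hand with the same commands it logs above (the profile name here is from this run; substitute your own):

  minikube node list -p ha-701715
  minikube stop -p ha-701715
  minikube start -p ha-701715 --wait=true
  minikube node list -p ha-701715   # should report the same nodes as before the stop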

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.71s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-701715 node delete m03 -v=7 --alsologtostderr: (9.777607565s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.71s)
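
Note: the quoted go-template in the last command prints one Ready condition status per node; unwrapped for readability, it is:

  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'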

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.26s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 stop -v=7 --alsologtostderr
E1007 13:16:33.678154  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-701715 stop -v=7 --alsologtostderr: (36.118630728s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr: exit status 7 (145.345228ms)

-- stdout --
	ha-701715
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-701715-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-701715-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1007 13:17:02.200273  650495 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:17:02.200684  650495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:17:02.200706  650495 out.go:358] Setting ErrFile to fd 2...
	I1007 13:17:02.200713  650495 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:17:02.201038  650495 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:17:02.201266  650495 out.go:352] Setting JSON to false
	I1007 13:17:02.201294  650495 mustload.go:65] Loading cluster: ha-701715
	I1007 13:17:02.201339  650495 notify.go:220] Checking for updates...
	I1007 13:17:02.201808  650495 config.go:182] Loaded profile config "ha-701715": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:17:02.201834  650495 status.go:174] checking status of ha-701715 ...
	I1007 13:17:02.202433  650495 cli_runner.go:164] Run: docker container inspect ha-701715 --format={{.State.Status}}
	I1007 13:17:02.223947  650495 status.go:371] ha-701715 host status = "Stopped" (err=<nil>)
	I1007 13:17:02.223975  650495 status.go:384] host is not running, skipping remaining checks
	I1007 13:17:02.223982  650495 status.go:176] ha-701715 status: &{Name:ha-701715 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:17:02.224015  650495 status.go:174] checking status of ha-701715-m02 ...
	I1007 13:17:02.224827  650495 cli_runner.go:164] Run: docker container inspect ha-701715-m02 --format={{.State.Status}}
	I1007 13:17:02.254517  650495 status.go:371] ha-701715-m02 host status = "Stopped" (err=<nil>)
	I1007 13:17:02.254543  650495 status.go:384] host is not running, skipping remaining checks
	I1007 13:17:02.254550  650495 status.go:176] ha-701715-m02 status: &{Name:ha-701715-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:17:02.254575  650495 status.go:174] checking status of ha-701715-m04 ...
	I1007 13:17:02.254911  650495 cli_runner.go:164] Run: docker container inspect ha-701715-m04 --format={{.State.Status}}
	I1007 13:17:02.274482  650495 status.go:371] ha-701715-m04 host status = "Stopped" (err=<nil>)
	I1007 13:17:02.274506  650495 status.go:384] host is not running, skipping remaining checks
	I1007 13:17:02.274514  650495 status.go:176] ha-701715-m04 status: &{Name:ha-701715-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.26s)
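
Note: the "Non-zero exit ... exit status 7" above is expected, not a failure: minikube status exits non-zero while hosts in the profile are stopped. A script that only wants the report can tolerate it, e.g.:

  minikube -p ha-701715 status || echo "status exited with $? (hosts are stopped)"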

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (64.86s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-701715 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-701715 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m3.762198001s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (64.86s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (46.06s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-701715 --control-plane -v=7 --alsologtostderr
E1007 13:18:49.806194  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-701715 --control-plane -v=7 --alsologtostderr: (45.058469702s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-701715 status -v=7 --alsologtostderr: (1.002922226s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.064182632s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                    
TestJSONOutput/start/Command (47.33s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-941081 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1007 13:19:17.519506  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:19:33.380992  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-941081 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (47.327236097s)
--- PASS: TestJSONOutput/start/Command (47.33s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-941081 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-941081 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-941081 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-941081 --output=json --user=testUser: (5.767816859s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-971899 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-971899 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (96.488927ms)

-- stdout --
	{"specversion":"1.0","id":"6e5bd497-cd24-41dc-a501-1bcc7786435a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-971899] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5607e98d-c02b-4e5f-860f-6326af81f064","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18424"}}
	{"specversion":"1.0","id":"a475e943-f52f-4593-a8e0-db5049e6ae45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"27035cba-31b1-477b-86cc-5a5f442a78b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig"}}
	{"specversion":"1.0","id":"d8515d9c-a420-42e4-9074-de23f7119070","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube"}}
	{"specversion":"1.0","id":"72967950-885e-4881-b1cd-1cef09383bac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"178e91a4-58ec-41ff-adb6-cee340331ffa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"92aad4ec-f012-4f0b-866c-0ee534e4d803","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-971899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-971899
--- PASS: TestErrorJSONOutput (0.24s)
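
Note: each stdout line above is a CloudEvents-style JSON object, so the stream is easy to post-process. A minimal sketch, assuming jq is installed (event type and field names taken from the output above):

  minikube start -p json-output-error-971899 --output=json --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'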

                                                
                                    
TestKicCustomNetwork/create_custom_network (40.93s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-878853 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-878853 --network=: (38.724070946s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-878853" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-878853
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-878853: (2.175800158s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.93s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.44s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-285561 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-285561 --network=bridge: (31.372690271s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-285561" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-285561
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-285561: (2.040708893s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.44s)

                                                
                                    
TestKicExistingNetwork (35.07s)

=== RUN   TestKicExistingNetwork
I1007 13:21:16.578750  580163 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1007 13:21:16.597976  580163 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1007 13:21:16.598068  580163 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1007 13:21:16.598086  580163 cli_runner.go:164] Run: docker network inspect existing-network
W1007 13:21:16.616790  580163 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1007 13:21:16.616823  580163 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1007 13:21:16.616837  580163 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1007 13:21:16.617038  580163 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1007 13:21:16.635646  580163 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d354a8ce15ea IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c6:9a:74:b0} reservation:<nil>}
I1007 13:21:16.635991  580163 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b68620}
I1007 13:21:16.636012  580163 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1007 13:21:16.636064  580163 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1007 13:21:16.709897  580163 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-926580 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-926580 --network=existing-network: (32.989746374s)
helpers_test.go:175: Cleaning up "existing-network-926580" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-926580
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-926580: (1.916052479s)
I1007 13:21:51.631863  580163 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.07s)
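
Note: the network the test pre-creates is a plain docker network; the same setup can be done by hand with the commands logged above:

  docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
    existing-network
  minikube start -p existing-network-926580 --network=existing-network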

                                                
                                    
TestKicCustomSubnet (33.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-827277 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-827277 --subnet=192.168.60.0/24: (31.410141687s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-827277 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-827277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-827277
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-827277: (2.034002143s)
--- PASS: TestKicCustomSubnet (33.46s)
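
Note: the subnet assertion boils down to one docker inspect with a go-template, as logged above:

  minikube start -p custom-subnet-827277 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-827277 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24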

                                                
                                    
TestKicStaticIP (31.74s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-611299 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-611299 --static-ip=192.168.200.200: (29.440229451s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-611299 ip
helpers_test.go:175: Cleaning up "static-ip-611299" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-611299
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-611299: (2.134830244s)
--- PASS: TestKicStaticIP (31.74s)
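
Note: a by-hand equivalent of this check, using the flags from the log (profile name is from this run):

  minikube start -p static-ip-611299 --static-ip=192.168.200.200
  minikube -p static-ip-611299 ip   # the test expects this to print 192.168.200.200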

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (67.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-684447 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-684447 --driver=docker  --container-runtime=containerd: (31.384933593s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-687425 --driver=docker  --container-runtime=containerd
E1007 13:23:49.807879  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-687425 --driver=docker  --container-runtime=containerd: (30.008736423s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-684447
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-687425
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-687425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-687425
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-687425: (2.365034949s)
helpers_test.go:175: Cleaning up "first-684447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-684447
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-684447: (2.25061006s)
--- PASS: TestMinikubeProfile (67.42s)
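
Note: "profile" with an argument selects the active profile, and "profile list -ojson" emits machine-readable output. A small sketch, assuming jq is installed for pretty-printing:

  minikube profile first-684447
  minikube profile list -ojson | jq .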

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.14s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-838220 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-838220 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.143306681s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.14s)
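
Note: the mount flags map one-to-one onto minikube start options; a by-hand sketch with the same values (the guest path /minikube-host is where these tests expect the host mount, per the Verify steps below):

  minikube start -p mount-start-1-838220 --memory=2048 --mount \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=containerd
  minikube -p mount-start-1-838220 ssh -- ls /minikube-host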

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-838220 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.06s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-840345 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-840345 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.057070682s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.06s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-840345 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-838220 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-838220 --alsologtostderr -v=5: (1.622551374s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-840345 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-840345
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-840345: (1.22004055s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.43s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-840345
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-840345: (6.428338819s)
E1007 13:24:33.380181  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/RestartStopped (7.43s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-840345 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (64.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-083522 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-083522 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m4.20538781s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (64.76s)
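
Note: the fresh two-node bring-up is a single start invocation; a by-hand equivalent with the flags from the log:

  minikube start -p multinode-083522 --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=containerd
  minikube -p multinode-083522 status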

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (18.07s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- rollout status deployment/busybox
E1007 13:25:56.453855  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-083522 -- rollout status deployment/busybox: (16.165643821s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-6qdbj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-tsgds -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-6qdbj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-tsgds -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-6qdbj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-tsgds -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.07s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-6qdbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-6qdbj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-tsgds -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-083522 -- exec busybox-7dff88458-tsgds -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)
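
Note: the pipeline in the exec commands extracts the resolved address of host.minikube.internal from busybox nslookup output; the awk 'NR==5' line pick is tied to that exact output layout, so it is brittle outside this image. Unwrapped (pod name from this run):

  kubectl exec busybox-7dff88458-6qdbj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"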

                                                
                                    
TestMultiNode/serial/AddNode (18.17s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-083522 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-083522 -v 3 --alsologtostderr: (17.493843499s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.17s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-083522 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.21s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp testdata/cp-test.txt multinode-083522:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2066097389/001/cp-test_multinode-083522.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522:/home/docker/cp-test.txt multinode-083522-m02:/home/docker/cp-test_multinode-083522_multinode-083522-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m02 "sudo cat /home/docker/cp-test_multinode-083522_multinode-083522-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522:/home/docker/cp-test.txt multinode-083522-m03:/home/docker/cp-test_multinode-083522_multinode-083522-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m03 "sudo cat /home/docker/cp-test_multinode-083522_multinode-083522-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp testdata/cp-test.txt multinode-083522-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2066097389/001/cp-test_multinode-083522-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522-m02:/home/docker/cp-test.txt multinode-083522:/home/docker/cp-test_multinode-083522-m02_multinode-083522.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522 "sudo cat /home/docker/cp-test_multinode-083522-m02_multinode-083522.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522-m02:/home/docker/cp-test.txt multinode-083522-m03:/home/docker/cp-test_multinode-083522-m02_multinode-083522-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m03 "sudo cat /home/docker/cp-test_multinode-083522-m02_multinode-083522-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp testdata/cp-test.txt multinode-083522-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2066097389/001/cp-test_multinode-083522-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522-m03:/home/docker/cp-test.txt multinode-083522:/home/docker/cp-test_multinode-083522-m03_multinode-083522.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522 "sudo cat /home/docker/cp-test_multinode-083522-m03_multinode-083522.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 cp multinode-083522-m03:/home/docker/cp-test.txt multinode-083522-m02:/home/docker/cp-test_multinode-083522-m03_multinode-083522-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 ssh -n multinode-083522-m02 "sudo cat /home/docker/cp-test_multinode-083522-m03_multinode-083522-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.21s)
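
Note: the cp matrix above exercises all three directions minikube cp supports. One example of each, with the same profile (the local destination path here is illustrative):

  minikube -p multinode-083522 cp testdata/cp-test.txt multinode-083522:/home/docker/cp-test.txt
  minikube -p multinode-083522 cp multinode-083522:/home/docker/cp-test.txt /tmp/cp-test_multinode-083522.txt
  minikube -p multinode-083522 cp multinode-083522:/home/docker/cp-test.txt multinode-083522-m02:/home/docker/cp-test_multinode-083522.txt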

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-083522 node stop m03: (1.235306055s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-083522 status: exit status 7 (585.687024ms)

-- stdout --
	multinode-083522
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-083522-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-083522-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-083522 status --alsologtostderr: exit status 7 (519.986159ms)

-- stdout --
	multinode-083522
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-083522-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-083522-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1007 13:26:30.586570  703838 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:26:30.586770  703838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:26:30.586801  703838 out.go:358] Setting ErrFile to fd 2...
	I1007 13:26:30.586822  703838 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:26:30.587237  703838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:26:30.587508  703838 out.go:352] Setting JSON to false
	I1007 13:26:30.587566  703838 mustload.go:65] Loading cluster: multinode-083522
	I1007 13:26:30.588694  703838 config.go:182] Loaded profile config "multinode-083522": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:26:30.588763  703838 status.go:174] checking status of multinode-083522 ...
	I1007 13:26:30.588337  703838 notify.go:220] Checking for updates...
	I1007 13:26:30.589369  703838 cli_runner.go:164] Run: docker container inspect multinode-083522 --format={{.State.Status}}
	I1007 13:26:30.608258  703838 status.go:371] multinode-083522 host status = "Running" (err=<nil>)
	I1007 13:26:30.608292  703838 host.go:66] Checking if "multinode-083522" exists ...
	I1007 13:26:30.608589  703838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-083522
	I1007 13:26:30.633405  703838 host.go:66] Checking if "multinode-083522" exists ...
	I1007 13:26:30.633804  703838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:26:30.633869  703838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-083522
	I1007 13:26:30.652893  703838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33644 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/multinode-083522/id_rsa Username:docker}
	I1007 13:26:30.746891  703838 ssh_runner.go:195] Run: systemctl --version
	I1007 13:26:30.751193  703838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:26:30.762805  703838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:26:30.825276  703838 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-07 13:26:30.815123693 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:26:30.825905  703838 kubeconfig.go:125] found "multinode-083522" server: "https://192.168.67.2:8443"
	I1007 13:26:30.825944  703838 api_server.go:166] Checking apiserver status ...
	I1007 13:26:30.825996  703838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1007 13:26:30.837487  703838 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1389/cgroup
	I1007 13:26:30.847555  703838 api_server.go:182] apiserver freezer: "9:freezer:/docker/2a4c4538b1390e3b8ff32c2f23d82459b120ffffb97ecf28cf258870203b4946/kubepods/burstable/pod8676c53051561ac64d083ed8c5e0d90b/df948e4b31beca826f3c92d32df30edcb158f13f910845d5c0ce928779919862"
	I1007 13:26:30.847637  703838 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2a4c4538b1390e3b8ff32c2f23d82459b120ffffb97ecf28cf258870203b4946/kubepods/burstable/pod8676c53051561ac64d083ed8c5e0d90b/df948e4b31beca826f3c92d32df30edcb158f13f910845d5c0ce928779919862/freezer.state
	I1007 13:26:30.856981  703838 api_server.go:204] freezer state: "THAWED"
	I1007 13:26:30.857010  703838 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1007 13:26:30.865089  703838 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1007 13:26:30.865119  703838 status.go:463] multinode-083522 apiserver status = Running (err=<nil>)
	I1007 13:26:30.865130  703838 status.go:176] multinode-083522 status: &{Name:multinode-083522 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:26:30.865155  703838 status.go:174] checking status of multinode-083522-m02 ...
	I1007 13:26:30.865457  703838 cli_runner.go:164] Run: docker container inspect multinode-083522-m02 --format={{.State.Status}}
	I1007 13:26:30.883424  703838 status.go:371] multinode-083522-m02 host status = "Running" (err=<nil>)
	I1007 13:26:30.883448  703838 host.go:66] Checking if "multinode-083522-m02" exists ...
	I1007 13:26:30.883751  703838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-083522-m02
	I1007 13:26:30.900661  703838 host.go:66] Checking if "multinode-083522-m02" exists ...
	I1007 13:26:30.900975  703838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1007 13:26:30.901014  703838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-083522-m02
	I1007 13:26:30.918036  703838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33649 SSHKeyPath:/home/jenkins/minikube-integration/18424-574640/.minikube/machines/multinode-083522-m02/id_rsa Username:docker}
	I1007 13:26:31.012958  703838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1007 13:26:31.025566  703838 status.go:176] multinode-083522-m02 status: &{Name:multinode-083522-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:26:31.025620  703838 status.go:174] checking status of multinode-083522-m03 ...
	I1007 13:26:31.025988  703838 cli_runner.go:164] Run: docker container inspect multinode-083522-m03 --format={{.State.Status}}
	I1007 13:26:31.044889  703838 status.go:371] multinode-083522-m03 host status = "Stopped" (err=<nil>)
	I1007 13:26:31.044987  703838 status.go:384] host is not running, skipping remaining checks
	I1007 13:26:31.045059  703838 status.go:176] multinode-083522-m03 status: &{Name:multinode-083522-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
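Editor's note: the status check in the log above verifies the apiserver in two steps: it confirms the process's freezer cgroup is "THAWED", then probes the /healthz endpoint and expects a 200 "ok". Below is a minimal Go sketch of that second probe, assuming a self-signed in-cluster certificate; the timeout and TLS settings are illustrative, not minikube's actual configuration.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // The endpoint comes from the log above; the client settings are
        // assumptions for illustration.
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skip verification because the in-cluster apiserver presents a
            // self-signed certificate. Never do this against untrusted hosts.
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.67.2:8443/healthz")
        if err != nil {
            fmt.Println("apiserver not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }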

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.38s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-083522 node start m03 -v=7 --alsologtostderr: (9.599232023s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.38s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (93.71s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-083522
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-083522
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-083522: (25.013274997s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-083522 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-083522 --wait=true -v=8 --alsologtostderr: (1m8.538539203s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-083522
--- PASS: TestMultiNode/serial/RestartKeepsNodes (93.71s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.55s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-083522 node delete m03: (4.846525003s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.55s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.04s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-083522 stop: (23.850382351s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-083522 status: exit status 7 (100.034494ms)

                                                
                                                
-- stdout --
	multinode-083522
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-083522-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-083522 status --alsologtostderr: exit status 7 (87.628855ms)

                                                
                                                
-- stdout --
	multinode-083522
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-083522-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1007 13:28:44.697139  712270 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:28:44.697286  712270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:28:44.697298  712270 out.go:358] Setting ErrFile to fd 2...
	I1007 13:28:44.697319  712270 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:28:44.697596  712270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:28:44.697932  712270 out.go:352] Setting JSON to false
	I1007 13:28:44.697979  712270 mustload.go:65] Loading cluster: multinode-083522
	I1007 13:28:44.698083  712270 notify.go:220] Checking for updates...
	I1007 13:28:44.698466  712270 config.go:182] Loaded profile config "multinode-083522": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:28:44.698484  712270 status.go:174] checking status of multinode-083522 ...
	I1007 13:28:44.699481  712270 cli_runner.go:164] Run: docker container inspect multinode-083522 --format={{.State.Status}}
	I1007 13:28:44.717192  712270 status.go:371] multinode-083522 host status = "Stopped" (err=<nil>)
	I1007 13:28:44.717218  712270 status.go:384] host is not running, skipping remaining checks
	I1007 13:28:44.717226  712270 status.go:176] multinode-083522 status: &{Name:multinode-083522 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1007 13:28:44.717259  712270 status.go:174] checking status of multinode-083522-m02 ...
	I1007 13:28:44.717576  712270 cli_runner.go:164] Run: docker container inspect multinode-083522-m02 --format={{.State.Status}}
	I1007 13:28:44.734174  712270 status.go:371] multinode-083522-m02 host status = "Stopped" (err=<nil>)
	I1007 13:28:44.734197  712270 status.go:384] host is not running, skipping remaining checks
	I1007 13:28:44.734205  712270 status.go:176] multinode-083522-m02 status: &{Name:multinode-083522-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.04s)
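Editor's note: both `status` invocations above exit non-zero yet the test passes, because exit status 7 here corresponds to stopped components, which is the expected state right after `minikube stop` (the report itself flags it as "may be ok" elsewhere). A hedged sketch of how a caller can distinguish that from a real failure, with the binary path and profile name taken from this run:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-083522", "status")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all components running")
        case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
            fmt.Println("cluster stopped (expected after `minikube stop`)")
        default:
            fmt.Println("status failed unexpectedly:", err)
        }
        fmt.Print(string(out))
    }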

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-083522 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1007 13:28:49.806396  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:29:33.380375  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-083522 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.544961577s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-083522 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.27s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.95s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-083522
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-083522-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-083522-m02 --driver=docker  --container-runtime=containerd: exit status 14 (105.721772ms)

                                                
                                                
-- stdout --
	* [multinode-083522-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-083522-m02' is duplicated with machine name 'multinode-083522-m02' in profile 'multinode-083522'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-083522-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-083522-m03 --driver=docker  --container-runtime=containerd: (31.449252352s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-083522
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-083522: exit status 80 (331.772005ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-083522 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-083522-m03 already exists in multinode-083522-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-083522-m03
E1007 13:30:12.881408  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-083522-m03: (1.976789062s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.95s)
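Editor's note: the MK_USAGE failure above is minikube's profile-name uniqueness validation: a new profile may not reuse a machine name that already belongs to a multi-node profile. A toy sketch of that rule, with the profile data hard-coded to mirror this run (minikube itself derives it from its profile store):

    package main

    import "fmt"

    func main() {
        // Hypothetical in-memory view of the profile store for this run.
        existing := map[string][]string{
            "multinode-083522": {"multinode-083522", "multinode-083522-m02", "multinode-083522-m03"},
        }
        candidate := "multinode-083522-m02"
        for profile, machines := range existing {
            for _, m := range machines {
                if m == candidate {
                    fmt.Printf("! Profile name '%s' is duplicated with machine name '%s' in profile '%s'\n", candidate, m, profile)
                    fmt.Println("X Exiting due to MK_USAGE: Profile name should be unique")
                    return
                }
            }
        }
        fmt.Println("profile name is unique")
    }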

                                                
                                    
TestPreload (114.09s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-493650 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-493650 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m14.056919491s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-493650 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-493650 image pull gcr.io/k8s-minikube/busybox: (1.931663695s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-493650
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-493650: (12.069307063s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-493650 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-493650 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (23.208525189s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-493650 image list
helpers_test.go:175: Cleaning up "test-preload-493650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-493650
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-493650: (2.511035789s)
--- PASS: TestPreload (114.09s)

                                                
                                    
TestScheduledStopUnix (109.58s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-248791 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-248791 --memory=2048 --driver=docker  --container-runtime=containerd: (33.194612106s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-248791 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-248791 -n scheduled-stop-248791
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-248791 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1007 13:32:44.768131  580163 retry.go:31] will retry after 111.757µs: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.768645  580163 retry.go:31] will retry after 169.167µs: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.770475  580163 retry.go:31] will retry after 240.089µs: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.771692  580163 retry.go:31] will retry after 356.028µs: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.772933  580163 retry.go:31] will retry after 523.966µs: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.774156  580163 retry.go:31] will retry after 495.842µs: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.775304  580163 retry.go:31] will retry after 1.584061ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.777606  580163 retry.go:31] will retry after 2.26255ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.780915  580163 retry.go:31] will retry after 2.145733ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.784186  580163 retry.go:31] will retry after 4.778256ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.789446  580163 retry.go:31] will retry after 5.395092ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.795728  580163 retry.go:31] will retry after 8.299133ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.805017  580163 retry.go:31] will retry after 7.133947ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.813317  580163 retry.go:31] will retry after 17.449195ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.831545  580163 retry.go:31] will retry after 36.867494ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
I1007 13:32:44.868860  580163 retry.go:31] will retry after 62.58169ms: open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-248791 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-248791 -n scheduled-stop-248791
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-248791
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-248791 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1007 13:33:49.808539  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-248791
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-248791: exit status 7 (75.106155ms)

                                                
                                                
-- stdout --
	scheduled-stop-248791
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-248791 -n scheduled-stop-248791
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-248791 -n scheduled-stop-248791: exit status 7 (71.137724ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-248791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-248791
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-248791: (4.715688504s)
--- PASS: TestScheduledStopUnix (109.58s)
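Editor's note: the burst of retry.go lines above is a backoff loop waiting for the scheduled stop's pid file to appear, with roughly geometric growth plus jitter (112µs, 169µs, 240µs, ...). A self-contained sketch of the same pattern; the path is the one from the log, while the backoff constants and attempt limit are assumptions for illustration:

    package main

    import (
        "fmt"
        "math/rand"
        "os"
        "time"
    )

    func main() {
        path := "/home/jenkins/minikube-integration/18424-574640/.minikube/profiles/scheduled-stop-248791/pid"
        delay := 100 * time.Microsecond
        for attempt := 1; attempt <= 16; attempt++ {
            data, err := os.ReadFile(path)
            if err == nil {
                fmt.Printf("pid file found after %d attempts: %s\n", attempt, data)
                return
            }
            // Grow the delay and add jitter, mirroring the increasing retry
            // intervals recorded in the log above.
            jitter := time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2
        }
        fmt.Println("giving up: pid file never appeared")
    }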

                                                
                                    
TestInsufficientStorage (10.4s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-397937 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-397937 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.881090829s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"95dccd7d-4125-4f58-9bd9-b5dc92faa450","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-397937] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f09f5780-61cf-447a-8f85-5b4392e14cf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18424"}}
	{"specversion":"1.0","id":"7823e2aa-aa05-4a53-905c-c0e9993736d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c8a160e4-380b-40ac-872c-c56c951d5664","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig"}}
	{"specversion":"1.0","id":"15c4af8d-5202-4720-9349-d5e070fa8f5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube"}}
	{"specversion":"1.0","id":"b80e26d4-2232-4c93-b9a7-8446b993efc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"15673d8e-b4cc-4159-a285-907600846580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7a956454-dbd9-4af5-8e54-079b392b30e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"29244ac0-6b8e-4634-b11a-f02150ee9f16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3b4ba959-2552-47db-ba5a-d0b1091cbc16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"31922885-f856-4e05-b01f-2afe5673ac84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"01c778f7-6265-4dbe-a0ff-7c633e202ffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-397937\" primary control-plane node in \"insufficient-storage-397937\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3543e412-ee9f-4b41-b4d8-3603839328d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3a83c92-5ae9-4a7f-a3df-d5ca12cd7696","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"34272fa8-9956-42cd-95ca-d04ad85bb7c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-397937 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-397937 --output=json --layout=cluster: exit status 7 (297.233152ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-397937","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-397937","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:34:08.780559  730760 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-397937" does not appear in /home/jenkins/minikube-integration/18424-574640/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-397937 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-397937 --output=json --layout=cluster: exit status 7 (304.53289ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-397937","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-397937","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1007 13:34:09.085142  730820 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-397937" does not appear in /home/jenkins/minikube-integration/18424-574640/kubeconfig
	E1007 13:34:09.096581  730820 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/insufficient-storage-397937/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-397937" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-397937
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-397937: (1.913244772s)
--- PASS: TestInsufficientStorage (10.40s)
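Editor's note: each `--output=json` line in this test is a CloudEvents-style envelope whose data payload carries string fields (message, currentstep, exitcode, advice, and so on). A hedged decoding sketch follows; the struct mirrors only the fields visible in this report and is not minikube's own type:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type minikubeEvent struct {
        SpecVersion string            `json:"specversion"`
        ID          string            `json:"id"`
        Source      string            `json:"source"`
        Type        string            `json:"type"`
        Data        map[string]string `json:"data"`
    }

    func main() {
        // For example: minikube start --output=json ... | this program
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip non-JSON lines
            }
            // io.k8s.sigs.minikube.error events carry the exit code and
            // advice seen in the RSRC_DOCKER_STORAGE failure above.
            fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
        }
    }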

                                                
                                    
TestRunningBinaryUpgrade (83.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2452794603 start -p running-upgrade-653224 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1007 13:38:49.810496  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2452794603 start -p running-upgrade-653224 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.328844834s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-653224 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1007 13:39:33.381130  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-653224 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.595318215s)
helpers_test.go:175: Cleaning up "running-upgrade-653224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-653224
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-653224: (2.989414302s)
--- PASS: TestRunningBinaryUpgrade (83.80s)

                                                
                                    
TestKubernetesUpgrade (355.29s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-912109 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-912109 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.393937663s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-912109
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-912109: (1.319749929s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-912109 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-912109 status --format={{.Host}}: exit status 7 (99.232108ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-912109 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-912109 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.06487787s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-912109 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-912109 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-912109 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (133.233531ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-912109] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-912109
	    minikube start -p kubernetes-upgrade-912109 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9121092 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-912109 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-912109 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-912109 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.553782748s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-912109" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-912109
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-912109: (2.587293306s)
--- PASS: TestKubernetesUpgrade (355.29s)
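Editor's note: the K8S_DOWNGRADE_UNSUPPORTED exit above comes from a version guard: the requested Kubernetes version may not sort below the cluster's current one. A minimal sketch using golang.org/x/mod/semver (an external module; the helper name is illustrative, not minikube's):

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    // checkDowngrade rejects any request that would move the cluster to an
    // older Kubernetes version. Versions must carry the "v" prefix, as in
    // the log above.
    func checkDowngrade(current, requested string) error {
        if semver.Compare(requested, current) < 0 {
            return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
        }
        return nil
    }

    func main() {
        if err := checkDowngrade("v1.31.1", "v1.20.0"); err != nil {
            fmt.Println("X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
        }
    }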

                                                
                                    
TestMissingContainerUpgrade (179.12s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.439583910 start -p missing-upgrade-447848 --memory=2200 --driver=docker  --container-runtime=containerd
E1007 13:34:33.380939  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.439583910 start -p missing-upgrade-447848 --memory=2200 --driver=docker  --container-runtime=containerd: (1m38.733540857s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-447848
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-447848: (10.309515702s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-447848
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-447848 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-447848 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.530634952s)
helpers_test.go:175: Cleaning up "missing-upgrade-447848" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-447848
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-447848: (2.736721714s)
--- PASS: TestMissingContainerUpgrade (179.12s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531687 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-531687 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (84.064197ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-531687] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.56s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531687 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-531687 --driver=docker  --container-runtime=containerd: (40.081727656s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-531687 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.56s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.71s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531687 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-531687 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.395553785s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-531687 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-531687 status -o json: exit status 2 (299.230488ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-531687","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-531687
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-531687: (2.016217306s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.71s)

                                                
                                    
TestNoKubernetes/serial/Start (8.98s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531687 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-531687 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.980142006s)
--- PASS: TestNoKubernetes/serial/Start (8.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-531687 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-531687 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.803265ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.98s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-531687
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-531687: (1.207548879s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.11s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-531687 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-531687 --driver=docker  --container-runtime=containerd: (7.108449756s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-531687 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-531687 "sudo systemctl is-active --quiet service kubelet": exit status 1 (338.769026ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.95s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.95s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (85.21s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2596647159 start -p stopped-upgrade-362442 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2596647159 start -p stopped-upgrade-362442 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (42.359542855s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2596647159 -p stopped-upgrade-362442 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2596647159 -p stopped-upgrade-362442 stop: (1.315451441s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-362442 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-362442 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.533080679s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (85.21s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-362442
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-362442: (1.182065308s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.18s)

                                                
                                    
TestPause/serial/Start (53.15s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-607162 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-607162 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (53.150798765s)
--- PASS: TestPause/serial/Start (53.15s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.82s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-607162 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-607162 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.793525908s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.82s)

                                                
                                    
TestPause/serial/Pause (1.16s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-607162 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-607162 --alsologtostderr -v=5: (1.15770441s)
--- PASS: TestPause/serial/Pause (1.16s)

                                                
                                    
TestPause/serial/VerifyStatus (0.49s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-607162 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-607162 --output=json --layout=cluster: exit status 2 (493.498955ms)

                                                
                                                
-- stdout --
	{"Name":"pause-607162","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-607162","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)
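
Note: the cluster-scoped status is plain JSON, which is why the test can assert on it even though the command exits non-zero while paused. A minimal Go sketch of decoding that payload (illustrative only, not part of the suite; the struct mirrors just the keys visible above):

package main

import (
	"encoding/json"
	"fmt"
)

// ClusterStatus mirrors a subset of the JSON keys shown in the log above.
type ClusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"` // in this payload: 418 = Paused, 200 = OK, 405 = Stopped
	StatusName string `json:"StatusName"`
}

func main() {
	raw := []byte(`{"Name":"pause-607162","StatusCode":418,"StatusName":"Paused"}`)
	var st ClusterStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s is %s (code %d)\n", st.Name, st.StatusName, st.StatusCode)
}

The 418/200/405 codes line up with the apiserver ("Paused"), kubeconfig ("OK"), and kubelet ("Stopped") entries in the stdout above.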

                                                
                                    
TestPause/serial/Unpause (0.95s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-607162 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.95s)

                                                
                                    
TestPause/serial/PauseAgain (0.99s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-607162 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.99s)

                                                
                                    
TestPause/serial/DeletePaused (3.05s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-607162 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-607162 --alsologtostderr -v=5: (3.053250538s)
--- PASS: TestPause/serial/DeletePaused (3.05s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.48s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-607162
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-607162: exit status 1 (22.697818ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-607162: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.48s)
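
Note: the deleted-resources check leans entirely on exit codes: `docker volume inspect` on a removed volume exits 1 with "no such volume", which is exactly the signal the test wants. A hedged sketch of the same probe (not the suite's actual helper; requires a local docker CLI):

package main

import (
	"fmt"
	"os/exec"
)

// volumeGone reports whether `docker volume inspect` fails for name,
// i.e. the volume no longer exists.
func volumeGone(name string) bool {
	return exec.Command("docker", "volume", "inspect", name).Run() != nil
}

func main() {
	fmt.Println("pause-607162 removed:", volumeGone("pause-607162"))
}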

                                                
                                    
TestNetworkPlugins/group/false (4.66s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-180537 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-180537 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (261.311899ms)

-- stdout --
	* [false-180537] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=18424
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1007 13:41:29.950972  771313 out.go:345] Setting OutFile to fd 1 ...
	I1007 13:41:29.951209  771313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:41:29.951238  771313 out.go:358] Setting ErrFile to fd 2...
	I1007 13:41:29.951262  771313 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1007 13:41:29.951546  771313 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18424-574640/.minikube/bin
	I1007 13:41:29.952018  771313 out.go:352] Setting JSON to false
	I1007 13:41:29.952988  771313 start.go:129] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12239,"bootTime":1728296251,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1007 13:41:29.953093  771313 start.go:139] virtualization:  
	I1007 13:41:29.956004  771313 out.go:177] * [false-180537] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1007 13:41:29.959172  771313 out.go:177]   - MINIKUBE_LOCATION=18424
	I1007 13:41:29.959245  771313 notify.go:220] Checking for updates...
	I1007 13:41:29.964049  771313 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1007 13:41:29.966185  771313 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18424-574640/kubeconfig
	I1007 13:41:29.968293  771313 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18424-574640/.minikube
	I1007 13:41:29.970319  771313 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1007 13:41:29.972261  771313 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1007 13:41:29.974778  771313 config.go:182] Loaded profile config "force-systemd-flag-040234": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1007 13:41:29.974956  771313 driver.go:394] Setting default libvirt URI to qemu:///system
	I1007 13:41:30.009726  771313 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1007 13:41:30.009950  771313 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1007 13:41:30.124017  771313 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-07 13:41:30.109471371 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1007 13:41:30.124147  771313 docker.go:318] overlay module found
	I1007 13:41:30.126779  771313 out.go:177] * Using the docker driver based on user configuration
	I1007 13:41:30.129026  771313 start.go:297] selected driver: docker
	I1007 13:41:30.129050  771313 start.go:901] validating driver "docker" against <nil>
	I1007 13:41:30.129066  771313 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1007 13:41:30.131625  771313 out.go:201] 
	W1007 13:41:30.133604  771313 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1007 13:41:30.135708  771313 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-180537 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-180537

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-180537

>>> host: /etc/nsswitch.conf:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /etc/hosts:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /etc/resolv.conf:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-180537

>>> host: crictl pods:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: crictl containers:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> k8s: describe netcat deployment:
error: context "false-180537" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-180537" does not exist

>>> k8s: netcat logs:
error: context "false-180537" does not exist

>>> k8s: describe coredns deployment:
error: context "false-180537" does not exist

>>> k8s: describe coredns pods:
error: context "false-180537" does not exist

>>> k8s: coredns logs:
error: context "false-180537" does not exist

>>> k8s: describe api server pod(s):
error: context "false-180537" does not exist

>>> k8s: api server logs:
error: context "false-180537" does not exist

>>> host: /etc/cni:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: ip a s:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: ip r s:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: iptables-save:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: iptables table nat:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> k8s: describe kube-proxy daemon set:
error: context "false-180537" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-180537" does not exist

>>> k8s: kube-proxy logs:
error: context "false-180537" does not exist

>>> host: kubelet daemon status:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: kubelet daemon config:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> k8s: kubelet logs:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-180537

>>> host: docker daemon status:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: docker daemon config:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /etc/docker/daemon.json:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: docker system info:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: cri-docker daemon status:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: cri-docker daemon config:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: cri-dockerd version:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: containerd daemon status:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: containerd daemon config:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /etc/containerd/config.toml:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: containerd config dump:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: crio daemon status:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: crio daemon config:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: /etc/crio:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

>>> host: crio config:
* Profile "false-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-180537"

----------------------- debugLogs end: false-180537 [took: 4.229157296s] --------------------------------
helpers_test.go:175: Cleaning up "false-180537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-180537
--- PASS: TestNetworkPlugins/group/false (4.66s)
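
Note: this test passes precisely because the start command fails: with --container-runtime=containerd, --cni=false is rejected up front as a usage error (MK_USAGE, exit code 14), since containerd needs a CNI plugin for pod networking. A sketch of asserting that behavior (binary path and profile name taken from the log above; not the suite's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-180537",
		"--cni=false", "--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("got the expected MK_USAGE failure: containerd requires CNI")
		return
	}
	fmt.Println("unexpected result:", err)
}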

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (155.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-716021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1007 13:43:49.806252  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:44:33.381195  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-716021 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m35.187027393s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (155.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.82s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-716021 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1e3fde9c-2a76-46c2-9871-bad833834cc2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1e3fde9c-2a76-46c2-9871-bad833834cc2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004762526s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-716021 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.82s)
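
Note: the "waiting 8m0s for pods matching ..." lines are a label-selector poll: list pods carrying the given label and succeed once one reports Running. A rough client-go sketch of that loop (assumes a reachable kubeconfig; not the suite's actual helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(8 * time.Minute) // same budget as the test
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println(p.Name, "is Running")
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for integration-test=busybox")
}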

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-716021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-716021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.55640593s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-716021 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-716021 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-716021 --alsologtostderr -v=3: (12.855032739s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.86s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (75.29s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-178678 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-178678 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m15.285512056s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (75.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-716021 -n old-k8s-version-716021
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-716021 -n old-k8s-version-716021: exit status 7 (103.522685ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-716021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
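
Note: "status error: exit status 7 (may be ok)" is deliberate: `minikube status` exits non-zero whenever the host is not running, and here exit status 7 accompanies the "Stopped" host state, so the test tolerates it and goes on to enable the dashboard addon offline. A sketch of surfacing that code instead of treating it as fatal (binary path from the log; illustrative only):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// hostStatus returns the {{.Host}} text plus the raw exit code, letting the
// caller decide whether a non-zero code such as 7 ("Stopped") is acceptable.
func hostStatus(profile string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", profile).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return string(out), ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	status, code := hostStatus("old-k8s-version-716021")
	fmt.Printf("host=%q exit=%d\n", status, code)
}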

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.41s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-178678 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ec39d460-bee1-45d6-b4eb-1944efae2282] Pending
helpers_test.go:344: "busybox" [ec39d460-bee1-45d6-b4eb-1944efae2282] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ec39d460-bee1-45d6-b4eb-1944efae2282] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004757657s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-178678 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.41s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-178678 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-178678 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.098973275s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-178678 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.09s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-178678 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-178678 --alsologtostderr -v=3: (12.093250289s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178678 -n no-preload-178678
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178678 -n no-preload-178678: exit status 7 (77.795828ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-178678 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (277.2s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-178678 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1007 13:48:49.806419  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:49:33.380115  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-178678 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m36.726715655s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-178678 -n no-preload-178678
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (277.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7wfpl" [033ed21b-2a6f-4558-af7f-72ff35d666ee] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003459181s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7wfpl" [033ed21b-2a6f-4558-af7f-72ff35d666ee] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006791453s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-178678 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pm244" [3fb763c1-e821-46ca-96cd-23af88ce8319] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005442928s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-178678 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-178678 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178678 -n no-preload-178678
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178678 -n no-preload-178678: exit status 2 (330.382366ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-178678 -n no-preload-178678
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-178678 -n no-preload-178678: exit status 2 (313.692081ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-178678 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-178678 -n no-preload-178678
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-178678 -n no-preload-178678
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.26s)
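
Note: the pause cycle above encodes its expectations in exit codes: while the cluster is paused, `status` exits 2 with the apiserver "Paused" and the kubelet "Stopped"; after unpause, both status probes exit 0 again. A compact sketch of driving that cycle (illustrative, with the binary path and profile from the log):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// run executes the minikube binary used in this report and returns its exit code.
func run(args ...string) int {
	err := exec.Command("out/minikube-linux-arm64", args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	if err != nil {
		return -1 // binary missing or never started
	}
	return 0
}

func main() {
	const p = "no-preload-178678"
	run("pause", "-p", p)
	fmt.Println("paused apiserver status exit:", run("status", "--format={{.APIServer}}", "-p", p)) // 2 while paused
	run("unpause", "-p", p)
	fmt.Println("running apiserver status exit:", run("status", "--format={{.APIServer}}", "-p", p)) // 0 once resumed
}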

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-pm244" [3fb763c1-e821-46ca-96cd-23af88ce8319] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004263195s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-716021 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (83.38s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-520248 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-520248 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m23.384130077s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (83.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-716021 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (4.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-716021 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-716021 --alsologtostderr -v=1: (1.490267313s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-716021 -n old-k8s-version-716021
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-716021 -n old-k8s-version-716021: exit status 2 (530.848241ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-716021 -n old-k8s-version-716021
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-716021 -n old-k8s-version-716021: exit status 2 (481.816285ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-716021 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-716021 -n old-k8s-version-716021
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-716021 -n old-k8s-version-716021
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.85s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-412568 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-412568 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (51.854448395s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-412568 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7cea5e2f-8349-428a-9f1e-6def82acc2c4] Pending
helpers_test.go:344: "busybox" [7cea5e2f-8349-428a-9f1e-6def82acc2c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7cea5e2f-8349-428a-9f1e-6def82acc2c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003923193s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-412568 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-412568 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-412568 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.049575339s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-412568 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-412568 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-412568 --alsologtostderr -v=3: (12.151055155s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-520248 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [967cfb99-7374-4473-bbc4-ce985e016b78] Pending
helpers_test.go:344: "busybox" [967cfb99-7374-4473-bbc4-ce985e016b78] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [967cfb99-7374-4473-bbc4-ce985e016b78] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004088094s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-520248 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568: exit status 7 (80.23478ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-412568 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)
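Exit status 7 with "Stopped" on stdout is the expected shape here: the profile was just stopped, and the harness explicitly tolerates the non-zero status ("may be ok") before enabling the addon. As a sketch of the same sequence run by hand:

out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-412568   # prints "Stopped", exits 7
out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-412568 --images=MetricsScraper=registry.k8s.io/echoserver:1.4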

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-412568 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1007 13:53:49.807455  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-412568 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.348027024s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.69s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.66s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-520248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-520248 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.46026002s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-520248 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.66s)

TestStartStop/group/embed-certs/serial/Stop (12.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-520248 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-520248 --alsologtostderr -v=3: (12.37622092s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.38s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520248 -n embed-certs-520248
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520248 -n embed-certs-520248: exit status 7 (90.43532ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-520248 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (291.83s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-520248 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1007 13:54:33.380854  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:33.275560  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:33.281971  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:33.293483  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:33.315088  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:33.356549  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:33.438223  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:33.599825  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:33.921489  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:34.563513  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:35.845429  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:38.406788  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:43.528977  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:55:53.770907  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:56:14.253317  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:56:55.215250  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:02.085500  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:02.092045  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:02.103501  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:02.124940  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:02.166445  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:02.247879  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:02.409399  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:02.731322  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:03.373141  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:04.655086  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:07.216625  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:12.338963  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:22.580900  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:57:43.062181  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-520248 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m51.296823015s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-520248 -n embed-certs-520248
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (291.83s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6cnwq" [9dae418f-a89e-4315-a8cb-b414212fa466] Running
E1007 13:58:17.138761  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005134952s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-6cnwq" [9dae418f-a89e-4315-a8cb-b414212fa466] Running
E1007 13:58:24.024391  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003580446s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-412568 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-412568 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-412568 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568: exit status 2 (349.670009ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568: exit status 2 (335.400568ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-412568 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-412568 -n default-k8s-diff-port-412568
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)
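The Pause subtest above is a fixed round-trip: while paused, the per-component status probes exit 2 (APIServer reports "Paused", the kubelet "Stopped"), and after unpause the same probes succeed. A sketch of the cycle:

out/minikube-linux-arm64 pause -p default-k8s-diff-port-412568 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-412568   # "Paused", exit 2
out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-412568     # "Stopped", exit 2
out/minikube-linux-arm64 unpause -p default-k8s-diff-port-412568 --alsologtostderr -v=1
out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-412568   # healthy again, exit 0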

TestStartStop/group/newest-cni/serial/FirstStart (39.15s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-961047 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1007 13:58:49.805881  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-961047 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (39.153771157s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.15s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qmnp8" [74f61deb-2b52-46e7-a822-20f094a4d313] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003330283s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-qmnp8" [74f61deb-2b52-46e7-a822-20f094a4d313] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00394326s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-520248 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-520248 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.38s)

TestStartStop/group/embed-certs/serial/Pause (4.67s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-520248 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-520248 --alsologtostderr -v=1: (1.413139159s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-520248 -n embed-certs-520248
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-520248 -n embed-certs-520248: exit status 2 (431.386563ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-520248 -n embed-certs-520248
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-520248 -n embed-certs-520248: exit status 2 (471.360854ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-520248 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p embed-certs-520248 --alsologtostderr -v=1: (1.110931293s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-520248 -n embed-certs-520248
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-520248 -n embed-certs-520248
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.67s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.77s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-961047 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-961047 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.770429283s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.77s)

TestStartStop/group/newest-cni/serial/Stop (1.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-961047 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-961047 --alsologtostderr -v=3: (1.549443276s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.55s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-961047 -n newest-cni-961047
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-961047 -n newest-cni-961047: exit status 7 (93.336388ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-961047 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (22.11s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-961047 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-961047 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (21.516743526s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-961047 -n newest-cni-961047
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.11s)

TestNetworkPlugins/group/auto/Start (101.32s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1007 13:59:16.458830  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
E1007 13:59:33.380442  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m41.320666884s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.32s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-961047 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (4.63s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-961047 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-961047 --alsologtostderr -v=1: (1.446963776s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-961047 -n newest-cni-961047
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-961047 -n newest-cni-961047: exit status 2 (384.380069ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-961047 -n newest-cni-961047
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-961047 -n newest-cni-961047: exit status 2 (434.388945ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-961047 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-961047 --alsologtostderr -v=1: (1.012499887s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-961047 -n newest-cni-961047
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-961047 -n newest-cni-961047
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.63s)
E1007 14:05:33.275633  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:58.246538  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:58.252975  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:58.264414  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:58.285838  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:58.327232  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:58.408753  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:58.570599  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:58.892243  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:05:59.534341  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:00.816222  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:03.378536  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:08.282226  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/default-k8s-diff-port-412568/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:08.500010  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (94.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1007 13:59:45.946379  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:00:33.275639  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m34.573745831s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-180537 "pgrep -a kubelet"
I1007 14:00:57.962367  580163 config.go:182] Loaded profile config "auto-180537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-180537 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mqt9t" [79c68b4c-7f0e-4e0d-bf3a-c3cd78ae0bd2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1007 14:01:00.981166  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/old-k8s-version-716021/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-mqt9t" [79c68b4c-7f0e-4e0d-bf3a-c3cd78ae0bd2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003777395s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-180537 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.19s)
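The DNS, Localhost, and HairPin subtests above all exec into the same netcat deployment; taken together, the three probes (verbatim from the log) are:

kubectl --context auto-180537 exec deployment/netcat -- nslookup kubernetes.default                   # in-cluster DNS
kubectl --context auto-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # localhost reachability
kubectl --context auto-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # hairpin: the pod reaching itself through its own service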

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-nswbl" [58e70420-4ce2-4a0d-aea8-79c6307a31e0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003668576s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-180537 "pgrep -a kubelet"
I1007 14:01:24.747673  580163 config.go:182] Loaded profile config "kindnet-180537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-180537 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-crcdq" [ea4ec503-263f-413b-a93b-04e00cacaae0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-crcdq" [ea4ec503-263f-413b-a93b-04e00cacaae0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.005609224s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.38s)

TestNetworkPlugins/group/calico/Start (74.46s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m14.464017538s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.46s)

TestNetworkPlugins/group/kindnet/DNS (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-180537 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/custom-flannel/Start (55.88s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1007 14:02:02.085680  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:02:29.787718  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/no-preload-178678/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (55.883261058s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (55.88s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-pf8f9" [870cbb90-17ab-45b4-98e3-907e963db548] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004775985s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-180537 "pgrep -a kubelet"
I1007 14:02:51.982359  580163 config.go:182] Loaded profile config "calico-180537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-180537 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-lr8gn" [28d87a16-a638-4e96-b096-ed3ba92c1232] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-lr8gn" [28d87a16-a638-4e96-b096-ed3ba92c1232] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004112718s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-180537 "pgrep -a kubelet"
I1007 14:02:56.047199  580163 config.go:182] Loaded profile config "custom-flannel-180537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-180537 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tvkt8" [676dd5f0-1dfe-40dd-9373-4dceaf8f7a63] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tvkt8" [676dd5f0-1dfe-40dd-9373-4dceaf8f7a63] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004657604s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.30s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-180537 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-180537 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (51.86s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1007 14:03:29.553847  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/default-k8s-diff-port-412568/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (51.858564713s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.86s)
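Note: each Start subtest drives the minikube binary with the flags shown and requires completion within the 15m --wait-timeout. A hedged sketch of invoking it the same way from Go with a hard deadline (binary path, profile name, and flags copied from the log; illustrative, not the test's actual harness):

// startcluster.go - sketch of driving `minikube start` under a deadline,
// roughly what the Start subtests do via os/exec.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "out/minikube-linux-arm64", "start",
		"-p", "enable-default-cni-180537", "--memory=3072",
		"--wait=true", "--wait-timeout=15m",
		"--enable-default-cni=true", "--driver=docker",
		"--container-runtime=containerd")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "start failed:", err)
		os.Exit(1)
	}
}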

                                                
                                    
TestNetworkPlugins/group/flannel/Start (60.82s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1007 14:03:32.884590  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:03:34.675189  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/default-k8s-diff-port-412568/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:03:44.917126  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/default-k8s-diff-port-412568/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:03:49.806325  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/functional-389582/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:04:05.398862  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/default-k8s-diff-port-412568/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.820195129s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.82s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-180537 "pgrep -a kubelet"
I1007 14:04:21.324493  580163 config.go:182] Loaded profile config "enable-default-cni-180537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-180537 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-69kvl" [9b235cb3-b2a8-43e6-80c7-5926ad4a4110] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-69kvl" [9b235cb3-b2a8-43e6-80c7-5926ad4a4110] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00454775s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)
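Note: the NetCatPod subtests replace the netcat deployment and then poll until pods matching app=netcat report Running (the Pending → Running transitions above). A minimal client-go sketch of such a wait loop (kubeconfig path and function names are illustrative assumptions, not the test's actual helper):

// waitnetcat.go - client-go sketch of waiting for pods with label
// app=netcat to reach phase Running, as the NetCatPod subtests do.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForNetcat(kubeconfig string, timeout time.Duration) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					running = false
					break
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("app=netcat pods not Running within %v", timeout)
}

func main() {
	// Hypothetical kubeconfig path for illustration.
	if err := waitForNetcat("/home/user/.kube/config", 15*time.Minute); err != nil {
		fmt.Println(err)
	}
}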

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-180537 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-xpcnh" [6d023184-fe9a-4e20-bef3-f311c8862d5e] Running
E1007 14:04:33.380893  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/addons-956205/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004219387s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-180537 "pgrep -a kubelet"
I1007 14:04:38.979914  580163 config.go:182] Loaded profile config "flannel-180537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.36s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-180537 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-df5wn" [a05b12f8-ec95-4ee7-878f-87d9b810e405] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-df5wn" [a05b12f8-ec95-4ee7-878f-87d9b810e405] Running
E1007 14:04:46.360907  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/default-k8s-diff-port-412568/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003910172s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.27s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-180537 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.74s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-180537 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m17.74036854s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.74s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-180537 "pgrep -a kubelet"
I1007 14:06:11.798788  580163 config.go:182] Loaded profile config "bridge-180537": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.27s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-180537 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-sv2vw" [d6658e39-f004-4667-ba83-cf7238bad2ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-sv2vw" [d6658e39-f004-4667-ba83-cf7238bad2ee] Running
E1007 14:06:18.441496  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:18.448003  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:18.459430  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:18.480891  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:18.522414  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:18.603917  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:18.741382  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/auto-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:18.765790  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:19.087859  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:19.730003  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
E1007 14:06:21.011631  580163 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/18424-574640/.minikube/profiles/kindnet-180537/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003986532s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-180537 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-180537 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (27/328)

TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.55s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-349416 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-349416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-349416
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
TestOffline (0.00s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0.00s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0.00s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.00s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0.00s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0.00s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0.00s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0.00s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0.00s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0.00s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0.00s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0.00s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0.00s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-031658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-031658
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.61s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-180537 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-180537

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-180537

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /etc/hosts:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /etc/resolv.conf:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-180537

>>> host: crictl pods:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: crictl containers:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> k8s: describe netcat deployment:
error: context "kubenet-180537" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-180537" does not exist

>>> k8s: netcat logs:
error: context "kubenet-180537" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-180537" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-180537" does not exist

>>> k8s: coredns logs:
error: context "kubenet-180537" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-180537" does not exist

>>> k8s: api server logs:
error: context "kubenet-180537" does not exist

>>> host: /etc/cni:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: ip a s:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: ip r s:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: iptables-save:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: iptables table nat:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-180537" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-180537" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-180537" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: kubelet daemon config:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> k8s: kubelet logs:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-180537

>>> host: docker daemon status:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: docker daemon config:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: docker system info:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: cri-docker daemon status:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: cri-docker daemon config:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: cri-dockerd version:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: containerd daemon status:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: containerd daemon config:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: containerd config dump:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: crio daemon status:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: crio daemon config:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: /etc/crio:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

>>> host: crio config:
* Profile "kubenet-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-180537"

----------------------- debugLogs end: kubenet-180537 [took: 4.413442699s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-180537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-180537
--- SKIP: TestNetworkPlugins/group/kubenet (4.61s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.43s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-180537 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-180537

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: crictl containers:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> k8s: describe netcat deployment:
error: context "cilium-180537" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-180537" does not exist

>>> k8s: netcat logs:
error: context "cilium-180537" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-180537" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-180537" does not exist

>>> k8s: coredns logs:
error: context "cilium-180537" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-180537" does not exist

>>> k8s: api server logs:
error: context "cilium-180537" does not exist

>>> host: /etc/cni:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: ip a s:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: ip r s:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: iptables-save:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: iptables table nat:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-180537

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-180537

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-180537" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-180537" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-180537

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-180537

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-180537" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-180537" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-180537" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-180537" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-180537" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: kubelet daemon config:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> k8s: kubelet logs:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
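
Note: the kubeconfig dumped above is empty (clusters, contexts, and users are all null), which is consistent with every kubectl probe in this dump failing on a missing context. As a minimal sketch of how that surfaces (standard kubectl invocations; the profile name is taken from this log, and the expected failure message is the one shown throughout this dump):

  $ kubectl config get-contexts              # an empty kubeconfig lists no contexts
  $ kubectl --context cilium-180537 get pods # fails: context "cilium-180537" does not exist
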
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-180537

>>> host: docker daemon status:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: docker daemon config:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: docker system info:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: cri-docker daemon status:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: cri-docker daemon config:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: cri-dockerd version:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: containerd daemon status:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: containerd daemon config:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: containerd config dump:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: crio daemon status:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: crio daemon config:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: /etc/crio:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

>>> host: crio config:
* Profile "cilium-180537" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-180537"

----------------------- debugLogs end: cilium-180537 [took: 5.213129259s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-180537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-180537
--- SKIP: TestNetworkPlugins/group/cilium (5.43s)
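
Note: every probe in the debugLogs dump above failed in one of two ways: host-side probes report a missing minikube profile, and kubectl probes report a missing kubeconfig context, because the cilium-180537 cluster was never started before the test group was skipped and cleaned up. A minimal reproduction sketch, assuming only the minikube binary and kubectl used in this run (the profile name comes from this log; the expected messages are the same ones shown above):

  $ out/minikube-linux-arm64 profile list             # cilium-180537 is not listed
  $ out/minikube-linux-arm64 status -p cilium-180537  # reports the profile as not found
  $ kubectl --context cilium-180537 get nodes         # error: context "cilium-180537" does not exist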