Test Report: Docker_Linux_containerd_arm64 19736

c03ccee26a80b9ecde7f622e8f7f7412408a7b8a:2024-10-01:36456

Failed tests (2/327)

Order  Failed test                                              Duration (s)
29     TestAddons/serial/Volcano                                210.96
301    TestStartStop/group/old-k8s-version/serial/SecondStart   382.73
TestAddons/serial/Volcano (210.96s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:801: volcano-scheduler stabilized in 48.949644ms
addons_test.go:809: volcano-admission stabilized in 49.072612ms
addons_test.go:817: volcano-controller stabilized in 49.125008ms
addons_test.go:823: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-tjdzc" [0af7cabe-6b84-4c0b-8959-afea6e6008a5] Running
addons_test.go:823: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003778961s
addons_test.go:827: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-2pl84" [366086b6-1aec-497a-84b2-b5aa42215286] Running
addons_test.go:827: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003530181s
addons_test.go:831: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-ncwd4" [363a4817-8687-4b74-b875-cc67ed8963ac] Running
addons_test.go:831: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003339979s
addons_test.go:836: (dbg) Run:  kubectl --context addons-164127 delete -n volcano-system job volcano-admission-init
addons_test.go:842: (dbg) Run:  kubectl --context addons-164127 create -f testdata/vcjob.yaml
addons_test.go:850: (dbg) Run:  kubectl --context addons-164127 get vcjob -n my-volcano
addons_test.go:868: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a7c78ffc-96fb-4acd-8adb-15478994d2e0] Pending
helpers_test.go:344: "test-job-nginx-0" [a7c78ffc-96fb-4acd-8adb-15478994d2e0] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:868: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:868: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-164127 -n addons-164127
addons_test.go:868: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-01 19:53:37.362521118 +0000 UTC m=+428.304352335
addons_test.go:868: (dbg) Run:  kubectl --context addons-164127 describe po test-job-nginx-0 -n my-volcano
addons_test.go:868: (dbg) kubectl --context addons-164127 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-1aeaac41-ae3e-419e-9798-79d7a272303f
volcano.sh/job-name: test-job
volcano.sh/job-retry-count: 0
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j6dbk (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-j6dbk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:868: (dbg) Run:  kubectl --context addons-164127 logs test-job-nginx-0 -n my-volcano
addons_test.go:868: (dbg) kubectl --context addons-164127 logs test-job-nginx-0 -n my-volcano:
addons_test.go:869: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-164127
helpers_test.go:235: (dbg) docker inspect addons-164127:
-- stdout --
	[
	    {
	        "Id": "845b45e186cb324d62aeabaee48b84dcbf3be9a12a621e52e5c4dacb9e2ccecf",
	        "Created": "2024-10-01T19:47:11.955060832Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 742509,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-01T19:47:12.091271699Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/845b45e186cb324d62aeabaee48b84dcbf3be9a12a621e52e5c4dacb9e2ccecf/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/845b45e186cb324d62aeabaee48b84dcbf3be9a12a621e52e5c4dacb9e2ccecf/hostname",
	        "HostsPath": "/var/lib/docker/containers/845b45e186cb324d62aeabaee48b84dcbf3be9a12a621e52e5c4dacb9e2ccecf/hosts",
	        "LogPath": "/var/lib/docker/containers/845b45e186cb324d62aeabaee48b84dcbf3be9a12a621e52e5c4dacb9e2ccecf/845b45e186cb324d62aeabaee48b84dcbf3be9a12a621e52e5c4dacb9e2ccecf-json.log",
	        "Name": "/addons-164127",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-164127:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-164127",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/46d2ef539b0f5f9792ebc63ac5e1418aad08ed60d01fe0881282d89d9d92cb33-init/diff:/var/lib/docker/overlay2/bda54826f89b5827b169734fdf2fa880f8697dc2c03a301f63e7d6df420607d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/46d2ef539b0f5f9792ebc63ac5e1418aad08ed60d01fe0881282d89d9d92cb33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/46d2ef539b0f5f9792ebc63ac5e1418aad08ed60d01fe0881282d89d9d92cb33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/46d2ef539b0f5f9792ebc63ac5e1418aad08ed60d01fe0881282d89d9d92cb33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-164127",
	                "Source": "/var/lib/docker/volumes/addons-164127/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-164127",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-164127",
	                "name.minikube.sigs.k8s.io": "addons-164127",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "135a285c4d15fc0d607adeae26face9b0507edd91e299393cd9d474ade0f0858",
	            "SandboxKey": "/var/run/docker/netns/135a285c4d15",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33533"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33534"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33537"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33535"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33536"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-164127": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3ff66a6a5a19d9aa7171c3e45388e889fce6b3f80ae02f744dab8ddee3b228bd",
	                    "EndpointID": "12625587d55abec68dd7d3cad4985acd84b794ba0e91f274430142bea49aa88b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-164127",
	                        "845b45e186cb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-164127 -n addons-164127
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-164127 logs -n 25: (1.576132033s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-037780   | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |                     |
	|         | -p download-only-037780              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| delete  | -p download-only-037780              | download-only-037780   | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| start   | -o=json --download-only              | download-only-543885   | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |                     |
	|         | -p download-only-543885              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| delete  | -p download-only-543885              | download-only-543885   | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| delete  | -p download-only-037780              | download-only-037780   | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| delete  | -p download-only-543885              | download-only-543885   | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| start   | --download-only -p                   | download-docker-084095 | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |                     |
	|         | download-docker-084095               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-084095            | download-docker-084095 | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| start   | --download-only -p                   | binary-mirror-849941   | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |                     |
	|         | binary-mirror-849941                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43939               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-849941              | binary-mirror-849941   | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| addons  | disable dashboard -p                 | addons-164127          | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |                     |
	|         | addons-164127                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-164127          | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |                     |
	|         | addons-164127                        |                        |         |         |                     |                     |
	| start   | -p addons-164127 --wait=true         | addons-164127          | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:50 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:46:48
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:46:48.618526  742018 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:46:48.618651  742018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:46:48.618661  742018 out.go:358] Setting ErrFile to fd 2...
	I1001 19:46:48.618667  742018 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:46:48.618902  742018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 19:46:48.619321  742018 out.go:352] Setting JSON to false
	I1001 19:46:48.620210  742018 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12556,"bootTime":1727799453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 19:46:48.620282  742018 start.go:139] virtualization:  
	I1001 19:46:48.623911  742018 out.go:177] * [addons-164127] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 19:46:48.625779  742018 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 19:46:48.625849  742018 notify.go:220] Checking for updates...
	I1001 19:46:48.629505  742018 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:46:48.631334  742018 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 19:46:48.633268  742018 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	I1001 19:46:48.635102  742018 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 19:46:48.636932  742018 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 19:46:48.638866  742018 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:46:48.666632  742018 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 19:46:48.666752  742018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 19:46:48.718009  742018 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-01 19:46:48.708923865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 19:46:48.718120  742018 docker.go:318] overlay module found
	I1001 19:46:48.720127  742018 out.go:177] * Using the docker driver based on user configuration
	I1001 19:46:48.721875  742018 start.go:297] selected driver: docker
	I1001 19:46:48.721894  742018 start.go:901] validating driver "docker" against <nil>
	I1001 19:46:48.721922  742018 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 19:46:48.722550  742018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 19:46:48.769299  742018 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-01 19:46:48.759933954 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 19:46:48.769516  742018 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 19:46:48.769733  742018 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:46:48.771498  742018 out.go:177] * Using Docker driver with root privileges
	I1001 19:46:48.773551  742018 cni.go:84] Creating CNI manager for ""
	I1001 19:46:48.773621  742018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 19:46:48.773636  742018 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 19:46:48.773710  742018 start.go:340] cluster config:
	{Name:addons-164127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-164127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:46:48.775995  742018 out.go:177] * Starting "addons-164127" primary control-plane node in "addons-164127" cluster
	I1001 19:46:48.777959  742018 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1001 19:46:48.780022  742018 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1001 19:46:48.781599  742018 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 19:46:48.781653  742018 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1001 19:46:48.781665  742018 cache.go:56] Caching tarball of preloaded images
	I1001 19:46:48.781688  742018 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 19:46:48.781748  742018 preload.go:172] Found /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 19:46:48.781758  742018 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1001 19:46:48.782116  742018 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/config.json ...
	I1001 19:46:48.782140  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/config.json: {Name:mk7e4e78d684ab18787161973cfd5253baa8c825 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:46:48.795610  742018 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 19:46:48.795745  742018 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 19:46:48.795767  742018 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 19:46:48.795773  742018 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 19:46:48.795784  742018 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 19:46:48.795794  742018 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from local cache
	I1001 19:47:05.597150  742018 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 from cached tarball
	I1001 19:47:05.597190  742018 cache.go:194] Successfully downloaded all kic artifacts
	I1001 19:47:05.597246  742018 start.go:360] acquireMachinesLock for addons-164127: {Name:mke209b7b2b0f9ecef232e67f5b82469b93bd150 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 19:47:05.597363  742018 start.go:364] duration metric: took 92.741µs to acquireMachinesLock for "addons-164127"
	I1001 19:47:05.597396  742018 start.go:93] Provisioning new machine with config: &{Name:addons-164127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-164127 Namespace:default APIServerHAVIP: APIServerName:mi
nikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1001 19:47:05.597486  742018 start.go:125] createHost starting for "" (driver="docker")
	I1001 19:47:05.600317  742018 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1001 19:47:05.600594  742018 start.go:159] libmachine.API.Create for "addons-164127" (driver="docker")
	I1001 19:47:05.600628  742018 client.go:168] LocalClient.Create starting
	I1001 19:47:05.600740  742018 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem
	I1001 19:47:05.970181  742018 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem
	I1001 19:47:06.492413  742018 cli_runner.go:164] Run: docker network inspect addons-164127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1001 19:47:06.507946  742018 cli_runner.go:211] docker network inspect addons-164127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1001 19:47:06.508044  742018 network_create.go:284] running [docker network inspect addons-164127] to gather additional debugging logs...
	I1001 19:47:06.508066  742018 cli_runner.go:164] Run: docker network inspect addons-164127
	W1001 19:47:06.523555  742018 cli_runner.go:211] docker network inspect addons-164127 returned with exit code 1
	I1001 19:47:06.523589  742018 network_create.go:287] error running [docker network inspect addons-164127]: docker network inspect addons-164127: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-164127 not found
	I1001 19:47:06.523603  742018 network_create.go:289] output of [docker network inspect addons-164127]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-164127 not found
	
	** /stderr **
	I1001 19:47:06.523702  742018 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 19:47:06.539611  742018 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d6a2e0}
	I1001 19:47:06.539658  742018 network_create.go:124] attempt to create docker network addons-164127 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1001 19:47:06.539716  742018 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-164127 addons-164127
	I1001 19:47:06.606520  742018 network_create.go:108] docker network addons-164127 192.168.49.0/24 created
	I1001 19:47:06.606553  742018 kic.go:121] calculated static IP "192.168.49.2" for the "addons-164127" container
	I1001 19:47:06.606635  742018 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1001 19:47:06.620183  742018 cli_runner.go:164] Run: docker volume create addons-164127 --label name.minikube.sigs.k8s.io=addons-164127 --label created_by.minikube.sigs.k8s.io=true
	I1001 19:47:06.636655  742018 oci.go:103] Successfully created a docker volume addons-164127
	I1001 19:47:06.636746  742018 cli_runner.go:164] Run: docker run --rm --name addons-164127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164127 --entrypoint /usr/bin/test -v addons-164127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1001 19:47:07.905220  742018 cli_runner.go:217] Completed: docker run --rm --name addons-164127-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164127 --entrypoint /usr/bin/test -v addons-164127:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib: (1.268425639s)
	I1001 19:47:07.905253  742018 oci.go:107] Successfully prepared a docker volume addons-164127
	I1001 19:47:07.905273  742018 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 19:47:07.905292  742018 kic.go:194] Starting extracting preloaded images to volume ...
	I1001 19:47:07.905368  742018 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-164127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
	I1001 19:47:11.889881  742018 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-164127:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (3.98446675s)
	I1001 19:47:11.889915  742018 kic.go:203] duration metric: took 3.984620527s to extract preloaded images to volume ...
	W1001 19:47:11.890055  742018 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1001 19:47:11.890160  742018 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1001 19:47:11.940916  742018 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-164127 --name addons-164127 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-164127 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-164127 --network addons-164127 --ip 192.168.49.2 --volume addons-164127:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1001 19:47:12.247735  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Running}}
	I1001 19:47:12.275259  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:12.295768  742018 cli_runner.go:164] Run: docker exec addons-164127 stat /var/lib/dpkg/alternatives/iptables
	I1001 19:47:12.357725  742018 oci.go:144] the created container "addons-164127" has a running status.
	I1001 19:47:12.357754  742018 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa...
	I1001 19:47:13.054015  742018 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1001 19:47:13.075660  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:13.094690  742018 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1001 19:47:13.094712  742018 kic_runner.go:114] Args: [docker exec --privileged addons-164127 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1001 19:47:13.160513  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:13.180595  742018 machine.go:93] provisionDockerMachine start ...
	I1001 19:47:13.180688  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:13.204592  742018 main.go:141] libmachine: Using SSH client type: native
	I1001 19:47:13.204871  742018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I1001 19:47:13.204881  742018 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 19:47:13.339795  742018 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-164127
	
	I1001 19:47:13.339821  742018 ubuntu.go:169] provisioning hostname "addons-164127"
	I1001 19:47:13.339887  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:13.358189  742018 main.go:141] libmachine: Using SSH client type: native
	I1001 19:47:13.358432  742018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I1001 19:47:13.358451  742018 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-164127 && echo "addons-164127" | sudo tee /etc/hostname
	I1001 19:47:13.518160  742018 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-164127
	
	I1001 19:47:13.518463  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:13.540369  742018 main.go:141] libmachine: Using SSH client type: native
	I1001 19:47:13.540658  742018 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I1001 19:47:13.540693  742018 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-164127' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-164127/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-164127' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 19:47:13.672431  742018 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 19:47:13.672481  742018 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19736-735883/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-735883/.minikube}
	I1001 19:47:13.672533  742018 ubuntu.go:177] setting up certificates
	I1001 19:47:13.672543  742018 provision.go:84] configureAuth start
	I1001 19:47:13.672620  742018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164127
	I1001 19:47:13.688970  742018 provision.go:143] copyHostCerts
	I1001 19:47:13.689052  742018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/ca.pem (1078 bytes)
	I1001 19:47:13.689181  742018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/cert.pem (1123 bytes)
	I1001 19:47:13.689243  742018 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/key.pem (1679 bytes)
	I1001 19:47:13.689295  742018 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem org=jenkins.addons-164127 san=[127.0.0.1 192.168.49.2 addons-164127 localhost minikube]
	I1001 19:47:13.990486  742018 provision.go:177] copyRemoteCerts
	I1001 19:47:13.990557  742018 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 19:47:13.990603  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:14.007482  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:14.100792  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 19:47:14.123519  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1001 19:47:14.147050  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1001 19:47:14.170217  742018 provision.go:87] duration metric: took 497.650018ms to configureAuth
	I1001 19:47:14.170241  742018 ubuntu.go:193] setting minikube options for container-runtime
	I1001 19:47:14.170423  742018 config.go:182] Loaded profile config "addons-164127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 19:47:14.170430  742018 machine.go:96] duration metric: took 989.817932ms to provisionDockerMachine
	I1001 19:47:14.170437  742018 client.go:171] duration metric: took 8.569802324s to LocalClient.Create
	I1001 19:47:14.170460  742018 start.go:167] duration metric: took 8.56986798s to libmachine.API.Create "addons-164127"
	I1001 19:47:14.170469  742018 start.go:293] postStartSetup for "addons-164127" (driver="docker")
	I1001 19:47:14.170479  742018 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 19:47:14.170532  742018 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 19:47:14.170572  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:14.186115  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:14.285188  742018 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 19:47:14.288045  742018 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 19:47:14.288079  742018 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 19:47:14.288119  742018 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 19:47:14.288128  742018 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 19:47:14.288142  742018 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-735883/.minikube/addons for local assets ...
	I1001 19:47:14.288213  742018 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-735883/.minikube/files for local assets ...
	I1001 19:47:14.288243  742018 start.go:296] duration metric: took 117.765942ms for postStartSetup
	I1001 19:47:14.288574  742018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164127
	I1001 19:47:14.303588  742018 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/config.json ...
	I1001 19:47:14.303861  742018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 19:47:14.303947  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:14.318726  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:14.408948  742018 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 19:47:14.412991  742018 start.go:128] duration metric: took 8.815488189s to createHost
	I1001 19:47:14.413015  742018 start.go:83] releasing machines lock for "addons-164127", held for 8.815637117s
	I1001 19:47:14.413087  742018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-164127
	I1001 19:47:14.428567  742018 ssh_runner.go:195] Run: cat /version.json
	I1001 19:47:14.428619  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:14.428689  742018 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 19:47:14.428760  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:14.450625  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:14.460645  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:14.665212  742018 ssh_runner.go:195] Run: systemctl --version
	I1001 19:47:14.669384  742018 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 19:47:14.673256  742018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1001 19:47:14.696415  742018 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1001 19:47:14.696509  742018 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 19:47:14.724576  742018 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
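(The loopback patch logged just above injects a "name" field into whatever *loopback.conf* file exists under /etc/cni/net.d and pins cniVersion to 1.0.0. A rough standalone Go equivalent of that edit, shown only to make the transformation concrete; the file path is a placeholder and this is not minikube's code.)

```go
// patch_loopback.go: illustrative re-implementation of the sed-based loopback
// CNI patch in the log: ensure a "name" field exists and pin cniVersion.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	const path = "/etc/cni/net.d/200-loopback.conf" // placeholder path

	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	var conf map[string]interface{}
	if err := json.Unmarshal(raw, &conf); err != nil {
		log.Fatal(err)
	}

	// Only touch configs that are actually the loopback plugin.
	if conf["type"] != "loopback" {
		return
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"

	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(path, out, 0644); err != nil {
		log.Fatal(err)
	}
}
```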
	I1001 19:47:14.724599  742018 start.go:495] detecting cgroup driver to use...
	I1001 19:47:14.724634  742018 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 19:47:14.724693  742018 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1001 19:47:14.736679  742018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 19:47:14.748194  742018 docker.go:217] disabling cri-docker service (if available) ...
	I1001 19:47:14.748269  742018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 19:47:14.761732  742018 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 19:47:14.775898  742018 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 19:47:14.863390  742018 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 19:47:14.962533  742018 docker.go:233] disabling docker service ...
	I1001 19:47:14.962600  742018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 19:47:14.982196  742018 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 19:47:14.994334  742018 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 19:47:15.087498  742018 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 19:47:15.185252  742018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 19:47:15.196720  742018 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 19:47:15.212591  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1001 19:47:15.222036  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 19:47:15.231353  742018 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 19:47:15.231427  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 19:47:15.240441  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 19:47:15.249761  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 19:47:15.258979  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 19:47:15.268558  742018 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 19:47:15.277729  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 19:47:15.287054  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 19:47:15.296369  742018 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 19:47:15.306232  742018 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 19:47:15.314417  742018 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 19:47:15.322511  742018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:47:15.413135  742018 ssh_runner.go:195] Run: sudo systemctl restart containerd
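(The sed chain above rewrites /etc/containerd/config.toml in place, switching the cgroup driver to cgroupfs via SystemdCgroup = false, pinning the pause image, and pointing conf_dir at /etc/cni/net.d, before containerd is restarted. A compact Go sketch of the same kind of line-oriented rewrite, limited to two of those settings and with the path hard-coded; illustrative only, not minikube's implementation.)

```go
// patch_containerd.go: illustrative line rewrites on containerd's config.toml,
// mirroring two of the sed edits in the log (cgroup driver and pause image).
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	cfg := string(data)

	// Same effect as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	cfg = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAllString(cfg, "${1}SystemdCgroup = false")

	// Same effect as: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|'
	cfg = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAllString(cfg, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`)

	if err := os.WriteFile(path, []byte(cfg), 0644); err != nil {
		log.Fatal(err)
	}
	// The log then reloads systemd and restarts containerd to pick this up.
}
```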
	I1001 19:47:15.543409  742018 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1001 19:47:15.543500  742018 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1001 19:47:15.547116  742018 start.go:563] Will wait 60s for crictl version
	I1001 19:47:15.547189  742018 ssh_runner.go:195] Run: which crictl
	I1001 19:47:15.550770  742018 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 19:47:15.588968  742018 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1001 19:47:15.589061  742018 ssh_runner.go:195] Run: containerd --version
	I1001 19:47:15.614961  742018 ssh_runner.go:195] Run: containerd --version
	I1001 19:47:15.639206  742018 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1001 19:47:15.641629  742018 cli_runner.go:164] Run: docker network inspect addons-164127 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 19:47:15.656787  742018 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1001 19:47:15.660305  742018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
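(The two commands above first grep for an existing host.minikube.internal entry and then rebuild /etc/hosts with a fresh one. A small Go sketch of that idempotent update, with the IP and hostname taken from the log and the logic simplified; illustrative only.)

```go
// hosts_entry.go: illustrative idempotent /etc/hosts update, mirroring the
// grep + rewrite the log performs for host.minikube.internal.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale line for this hostname, keep everything else.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}
```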
	I1001 19:47:15.670988  742018 kubeadm.go:883] updating cluster {Name:addons-164127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-164127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 19:47:15.671111  742018 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 19:47:15.671175  742018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:47:15.706771  742018 containerd.go:627] all images are preloaded for containerd runtime.
	I1001 19:47:15.706796  742018 containerd.go:534] Images already preloaded, skipping extraction
	I1001 19:47:15.706856  742018 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 19:47:15.741074  742018 containerd.go:627] all images are preloaded for containerd runtime.
	I1001 19:47:15.741099  742018 cache_images.go:84] Images are preloaded, skipping loading
	I1001 19:47:15.741107  742018 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1001 19:47:15.741201  742018 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-164127 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-164127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 19:47:15.741269  742018 ssh_runner.go:195] Run: sudo crictl info
	I1001 19:47:15.775657  742018 cni.go:84] Creating CNI manager for ""
	I1001 19:47:15.775680  742018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 19:47:15.775692  742018 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 19:47:15.775716  742018 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-164127 NodeName:addons-164127 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 19:47:15.775852  742018 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-164127"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1001 19:47:15.775924  742018 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 19:47:15.784418  742018 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 19:47:15.784501  742018 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 19:47:15.792934  742018 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1001 19:47:15.810135  742018 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 19:47:15.827490  742018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I1001 19:47:15.844424  742018 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1001 19:47:15.848037  742018 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 19:47:15.860551  742018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:47:15.947745  742018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:47:15.961600  742018 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127 for IP: 192.168.49.2
	I1001 19:47:15.961635  742018 certs.go:194] generating shared ca certs ...
	I1001 19:47:15.961653  742018 certs.go:226] acquiring lock for ca certs: {Name:mk132cf96fd4e71a64bde5e1335b23d155d99f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:15.962382  742018 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-735883/.minikube/ca.key
	I1001 19:47:16.113349  742018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt ...
	I1001 19:47:16.113377  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt: {Name:mk02a26878d280785ee740edf5e0f78442d34596 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:16.113570  742018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-735883/.minikube/ca.key ...
	I1001 19:47:16.113584  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/ca.key: {Name:mk0fb3dd5e803bf9792ab0d30a9e8d56a7b547be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:16.114352  742018 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.key
	I1001 19:47:16.981907  742018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.crt ...
	I1001 19:47:16.981943  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.crt: {Name:mk4ae2a88da4dabc5471956bfe3a11742585e54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:16.982647  742018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.key ...
	I1001 19:47:16.982667  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.key: {Name:mkd5def06490c54c3850cd812d652ed637045f77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:16.982760  742018 certs.go:256] generating profile certs ...
	I1001 19:47:16.982822  742018 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.key
	I1001 19:47:16.982842  742018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt with IP's: []
	I1001 19:47:17.222916  742018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt ...
	I1001 19:47:17.222947  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: {Name:mk09c76bda598c00121bd8a447100d2f0878abe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:17.223126  742018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.key ...
	I1001 19:47:17.223139  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.key: {Name:mke3163d24c108a65cd7039b0498f086178dd9de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:17.223721  742018 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.key.f3ad724d
	I1001 19:47:17.223746  742018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.crt.f3ad724d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1001 19:47:17.457235  742018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.crt.f3ad724d ...
	I1001 19:47:17.457264  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.crt.f3ad724d: {Name:mkbf756b2900367c38a3c2c61bb79f65921bfc73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:17.457444  742018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.key.f3ad724d ...
	I1001 19:47:17.457457  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.key.f3ad724d: {Name:mk28427e0abbd66f94a829f1cbc88003e76ac8ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:17.457545  742018 certs.go:381] copying /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.crt.f3ad724d -> /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.crt
	I1001 19:47:17.457625  742018 certs.go:385] copying /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.key.f3ad724d -> /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.key
	I1001 19:47:17.457677  742018 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/proxy-client.key
	I1001 19:47:17.457702  742018 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/proxy-client.crt with IP's: []
	I1001 19:47:17.716290  742018 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/proxy-client.crt ...
	I1001 19:47:17.716321  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/proxy-client.crt: {Name:mk24303574182fd99c7cacd1d25937093487b815 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:17.716514  742018 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/proxy-client.key ...
	I1001 19:47:17.716529  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/proxy-client.key: {Name:mke09e32d4e4bc47fab023165389b868ca19f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:17.716718  742018 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 19:47:17.716760  742018 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem (1078 bytes)
	I1001 19:47:17.716792  742018 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem (1123 bytes)
	I1001 19:47:17.716825  742018 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem (1679 bytes)
	I1001 19:47:17.717438  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 19:47:17.742255  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 19:47:17.765905  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 19:47:17.789082  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 19:47:17.812550  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1001 19:47:17.836483  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 19:47:17.860828  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 19:47:17.885538  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 19:47:17.909760  742018 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 19:47:17.933845  742018 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 19:47:17.951628  742018 ssh_runner.go:195] Run: openssl version
	I1001 19:47:17.957250  742018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 19:47:17.966512  742018 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:47:17.969987  742018 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:47:17.970092  742018 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 19:47:17.976853  742018 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
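(For context on the two steps above: OpenSSL trusts CAs in /etc/ssl/certs through symlinks named after the certificate's subject hash, which is why the log computes `openssl x509 -hash -noout` and then links b5213941.0 to minikubeCA.pem. A minimal Go sketch of that hash-and-symlink step, shelling out to openssl the same way and using the paths shown in the log; illustrative only.)

```go
// ca_hashlink.go: illustrative subject-hash symlink creation for a CA cert,
// mirroring the openssl + ln -fs steps in the log.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const caPath = "/usr/share/ca-certificates/minikubeCA.pem"

	// `openssl x509 -hash -noout -in <cert>` prints the subject hash (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	// OpenSSL looks up CAs as /etc/ssl/certs/<subject-hash>.0 symlinks.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(caPath, link); err != nil {
		log.Fatal(err)
	}
}
```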
	I1001 19:47:17.986095  742018 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 19:47:17.989336  742018 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 19:47:17.989429  742018 kubeadm.go:392] StartCluster: {Name:addons-164127 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-164127 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:47:17.989526  742018 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1001 19:47:17.989583  742018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 19:47:18.029590  742018 cri.go:89] found id: ""
	I1001 19:47:18.029661  742018 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 19:47:18.038690  742018 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 19:47:18.047699  742018 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1001 19:47:18.047767  742018 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 19:47:18.056978  742018 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 19:47:18.057000  742018 kubeadm.go:157] found existing configuration files:
	
	I1001 19:47:18.057052  742018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 19:47:18.066038  742018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 19:47:18.066145  742018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 19:47:18.074740  742018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 19:47:18.083391  742018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 19:47:18.083457  742018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 19:47:18.091603  742018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 19:47:18.100410  742018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 19:47:18.100522  742018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 19:47:18.108622  742018 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 19:47:18.117599  742018 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 19:47:18.117666  742018 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 19:47:18.125713  742018 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1001 19:47:18.162712  742018 kubeadm.go:310] W1001 19:47:18.161839    1016 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:47:18.163660  742018 kubeadm.go:310] W1001 19:47:18.163086    1016 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1001 19:47:18.183625  742018 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1001 19:47:18.245399  742018 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1001 19:47:33.514952  742018 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 19:47:33.515011  742018 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 19:47:33.515098  742018 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1001 19:47:33.515152  742018 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1001 19:47:33.515187  742018 kubeadm.go:310] OS: Linux
	I1001 19:47:33.515232  742018 kubeadm.go:310] CGROUPS_CPU: enabled
	I1001 19:47:33.515280  742018 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1001 19:47:33.515326  742018 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1001 19:47:33.515375  742018 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1001 19:47:33.515422  742018 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1001 19:47:33.515471  742018 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1001 19:47:33.515517  742018 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1001 19:47:33.515564  742018 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1001 19:47:33.515610  742018 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1001 19:47:33.515681  742018 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 19:47:33.515787  742018 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 19:47:33.515876  742018 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 19:47:33.515938  742018 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 19:47:33.518370  742018 out.go:235]   - Generating certificates and keys ...
	I1001 19:47:33.518464  742018 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 19:47:33.518533  742018 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 19:47:33.518603  742018 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 19:47:33.518662  742018 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 19:47:33.518730  742018 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 19:47:33.518784  742018 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 19:47:33.518843  742018 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 19:47:33.518963  742018 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-164127 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 19:47:33.519018  742018 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 19:47:33.519134  742018 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-164127 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1001 19:47:33.519202  742018 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 19:47:33.519274  742018 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 19:47:33.519321  742018 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 19:47:33.519378  742018 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 19:47:33.519432  742018 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 19:47:33.519492  742018 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 19:47:33.519550  742018 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 19:47:33.519615  742018 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 19:47:33.519673  742018 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 19:47:33.519757  742018 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 19:47:33.519825  742018 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 19:47:33.521490  742018 out.go:235]   - Booting up control plane ...
	I1001 19:47:33.521597  742018 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 19:47:33.521686  742018 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 19:47:33.521773  742018 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 19:47:33.521882  742018 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 19:47:33.521975  742018 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 19:47:33.522018  742018 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 19:47:33.522152  742018 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 19:47:33.522289  742018 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 19:47:33.522373  742018 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.507268912s
	I1001 19:47:33.522447  742018 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1001 19:47:33.522504  742018 kubeadm.go:310] [api-check] The API server is healthy after 6.002053404s
	I1001 19:47:33.522613  742018 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1001 19:47:33.522746  742018 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1001 19:47:33.522823  742018 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1001 19:47:33.523030  742018 kubeadm.go:310] [mark-control-plane] Marking the node addons-164127 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1001 19:47:33.523096  742018 kubeadm.go:310] [bootstrap-token] Using token: oe9gy7.36du9a037a16hmzd
	I1001 19:47:33.526420  742018 out.go:235]   - Configuring RBAC rules ...
	I1001 19:47:33.526528  742018 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1001 19:47:33.526624  742018 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1001 19:47:33.526803  742018 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1001 19:47:33.526971  742018 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1001 19:47:33.527099  742018 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1001 19:47:33.527188  742018 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1001 19:47:33.527304  742018 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1001 19:47:33.527350  742018 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1001 19:47:33.527406  742018 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1001 19:47:33.527413  742018 kubeadm.go:310] 
	I1001 19:47:33.527472  742018 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1001 19:47:33.527482  742018 kubeadm.go:310] 
	I1001 19:47:33.527558  742018 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1001 19:47:33.527566  742018 kubeadm.go:310] 
	I1001 19:47:33.527591  742018 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1001 19:47:33.527653  742018 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1001 19:47:33.527709  742018 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1001 19:47:33.527716  742018 kubeadm.go:310] 
	I1001 19:47:33.527770  742018 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1001 19:47:33.527778  742018 kubeadm.go:310] 
	I1001 19:47:33.527825  742018 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1001 19:47:33.527832  742018 kubeadm.go:310] 
	I1001 19:47:33.527885  742018 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1001 19:47:33.527962  742018 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1001 19:47:33.528033  742018 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1001 19:47:33.528041  742018 kubeadm.go:310] 
	I1001 19:47:33.528124  742018 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1001 19:47:33.528203  742018 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1001 19:47:33.528211  742018 kubeadm.go:310] 
	I1001 19:47:33.528293  742018 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token oe9gy7.36du9a037a16hmzd \
	I1001 19:47:33.528403  742018 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df0ded2b47976dc326b7fe36508ac04474a63bde204f0f47be291484525eea8a \
	I1001 19:47:33.528427  742018 kubeadm.go:310] 	--control-plane 
	I1001 19:47:33.528434  742018 kubeadm.go:310] 
	I1001 19:47:33.528530  742018 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1001 19:47:33.528538  742018 kubeadm.go:310] 
	I1001 19:47:33.528620  742018 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token oe9gy7.36du9a037a16hmzd \
	I1001 19:47:33.528737  742018 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:df0ded2b47976dc326b7fe36508ac04474a63bde204f0f47be291484525eea8a 
	I1001 19:47:33.528749  742018 cni.go:84] Creating CNI manager for ""
	I1001 19:47:33.528756  742018 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 19:47:33.530779  742018 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1001 19:47:33.532690  742018 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1001 19:47:33.536438  742018 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1001 19:47:33.536480  742018 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1001 19:47:33.554737  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1001 19:47:33.820127  742018 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1001 19:47:33.820253  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:33.820264  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-164127 minikube.k8s.io/updated_at=2024_10_01T19_47_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4 minikube.k8s.io/name=addons-164127 minikube.k8s.io/primary=true
	I1001 19:47:34.020745  742018 ops.go:34] apiserver oom_adj: -16
	I1001 19:47:34.020902  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:34.521273  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:35.021100  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:35.521661  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:36.021480  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:36.521314  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:37.021682  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:37.521635  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:38.021618  742018 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1001 19:47:38.112934  742018 kubeadm.go:1113] duration metric: took 4.292746532s to wait for elevateKubeSystemPrivileges
	I1001 19:47:38.112969  742018 kubeadm.go:394] duration metric: took 20.12354455s to StartCluster
	I1001 19:47:38.112987  742018 settings.go:142] acquiring lock: {Name:mk46877febca9f587b39958e976b5a1299db9afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:38.113120  742018 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 19:47:38.113504  742018 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/kubeconfig: {Name:mk16c47fd3084557c83466477611ca0e739aa58e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 19:47:38.114271  742018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1001 19:47:38.114297  742018 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1001 19:47:38.114528  742018 config.go:182] Loaded profile config "addons-164127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 19:47:38.114570  742018 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1001 19:47:38.114652  742018 addons.go:69] Setting yakd=true in profile "addons-164127"
	I1001 19:47:38.114671  742018 addons.go:234] Setting addon yakd=true in "addons-164127"
	I1001 19:47:38.114696  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.115176  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.115415  742018 addons.go:69] Setting inspektor-gadget=true in profile "addons-164127"
	I1001 19:47:38.115434  742018 addons.go:234] Setting addon inspektor-gadget=true in "addons-164127"
	I1001 19:47:38.115456  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.115905  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.116424  742018 addons.go:69] Setting cloud-spanner=true in profile "addons-164127"
	I1001 19:47:38.116471  742018 addons.go:234] Setting addon cloud-spanner=true in "addons-164127"
	I1001 19:47:38.116497  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.116907  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.118272  742018 addons.go:69] Setting metrics-server=true in profile "addons-164127"
	I1001 19:47:38.118519  742018 addons.go:234] Setting addon metrics-server=true in "addons-164127"
	I1001 19:47:38.118570  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.120159  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.118422  742018 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-164127"
	I1001 19:47:38.118435  742018 addons.go:69] Setting registry=true in profile "addons-164127"
	I1001 19:47:38.118441  742018 addons.go:69] Setting storage-provisioner=true in profile "addons-164127"
	I1001 19:47:38.118448  742018 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-164127"
	I1001 19:47:38.118458  742018 addons.go:69] Setting volcano=true in profile "addons-164127"
	I1001 19:47:38.118464  742018 addons.go:69] Setting volumesnapshots=true in profile "addons-164127"
	I1001 19:47:38.118696  742018 out.go:177] * Verifying Kubernetes components...
	I1001 19:47:38.118760  742018 addons.go:69] Setting ingress=true in profile "addons-164127"
	I1001 19:47:38.118766  742018 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-164127"
	I1001 19:47:38.118770  742018 addons.go:69] Setting default-storageclass=true in profile "addons-164127"
	I1001 19:47:38.118774  742018 addons.go:69] Setting gcp-auth=true in profile "addons-164127"
	I1001 19:47:38.118789  742018 addons.go:69] Setting ingress-dns=true in profile "addons-164127"
	I1001 19:47:38.120746  742018 addons.go:234] Setting addon ingress-dns=true in "addons-164127"
	I1001 19:47:38.120803  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.121389  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.128820  742018 addons.go:234] Setting addon volcano=true in "addons-164127"
	I1001 19:47:38.128921  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.129424  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.147853  742018 addons.go:234] Setting addon volumesnapshots=true in "addons-164127"
	I1001 19:47:38.147933  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.148490  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.163371  742018 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-164127"
	I1001 19:47:38.163448  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.163941  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.164547  742018 addons.go:234] Setting addon ingress=true in "addons-164127"
	I1001 19:47:38.164599  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.165036  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.173773  742018 addons.go:234] Setting addon registry=true in "addons-164127"
	I1001 19:47:38.173832  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.174312  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.182082  742018 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-164127"
	I1001 19:47:38.182132  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.182602  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.200861  742018 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-164127"
	I1001 19:47:38.201218  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.201763  742018 addons.go:234] Setting addon storage-provisioner=true in "addons-164127"
	I1001 19:47:38.201811  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.202239  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.218271  742018 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-164127"
	I1001 19:47:38.218659  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.218808  742018 mustload.go:65] Loading cluster: addons-164127
	I1001 19:47:38.218970  742018 config.go:182] Loaded profile config "addons-164127": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 19:47:38.219176  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.297580  742018 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I1001 19:47:38.297640  742018 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1001 19:47:38.298966  742018 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1001 19:47:38.298997  742018 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1001 19:47:38.299070  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.299353  742018 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1001 19:47:38.299379  742018 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1001 19:47:38.299450  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.317500  742018 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 19:47:38.324443  742018 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1001 19:47:38.328724  742018 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 19:47:38.328791  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1001 19:47:38.328886  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.332595  742018 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1001 19:47:38.360374  742018 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1001 19:47:38.372228  742018 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1001 19:47:38.399891  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1001 19:47:38.400125  742018 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1001 19:47:38.400159  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1001 19:47:38.400248  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.406461  742018 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1001 19:47:38.406554  742018 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1001 19:47:38.407271  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.439332  742018 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1001 19:47:38.439461  742018 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 19:47:38.443157  742018 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1001 19:47:38.445082  742018 addons.go:234] Setting addon default-storageclass=true in "addons-164127"
	I1001 19:47:38.451423  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.451888  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.445141  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.446102  742018 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-164127"
	I1001 19:47:38.453853  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:38.454357  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:38.473814  742018 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1001 19:47:38.477773  742018 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1001 19:47:38.477802  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1001 19:47:38.477870  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.447839  742018 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 19:47:38.480780  742018 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 19:47:38.480862  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.504935  742018 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 19:47:38.504957  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1001 19:47:38.505019  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.514520  742018 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1001 19:47:38.517202  742018 out.go:177]   - Using image docker.io/registry:2.8.3
	I1001 19:47:38.519914  742018 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1001 19:47:38.519935  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1001 19:47:38.520015  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.525186  742018 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:47:38.525206  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 19:47:38.525264  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.544803  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1001 19:47:38.548606  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1001 19:47:38.550550  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1001 19:47:38.552633  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1001 19:47:38.554689  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1001 19:47:38.559670  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1001 19:47:38.560667  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.561417  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.563134  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.564626  742018 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 19:47:38.564653  742018 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 19:47:38.564711  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.565432  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.570088  742018 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 19:47:38.571599  742018 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1001 19:47:38.574839  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1001 19:47:38.574938  742018 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 19:47:38.577462  742018 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1001 19:47:38.577645  742018 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1001 19:47:38.581232  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.581862  742018 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1001 19:47:38.581882  742018 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1001 19:47:38.582016  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.583691  742018 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 19:47:38.583811  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1001 19:47:38.583993  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.633677  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.638632  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.687503  742018 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1001 19:47:38.689826  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.693358  742018 out.go:177]   - Using image docker.io/busybox:stable
	I1001 19:47:38.694818  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.698585  742018 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 19:47:38.698612  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1001 19:47:38.698676  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:38.710552  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.724358  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.739095  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.748445  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.759808  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:38.816805  742018 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 19:47:39.169841  742018 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 19:47:39.169868  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1001 19:47:39.311845  742018 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1001 19:47:39.311919  742018 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1001 19:47:39.354958  742018 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1001 19:47:39.355023  742018 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1001 19:47:39.370719  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1001 19:47:39.377272  742018 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1001 19:47:39.377298  742018 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1001 19:47:39.407014  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1001 19:47:39.412836  742018 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 19:47:39.412863  742018 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 19:47:39.425939  742018 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1001 19:47:39.425966  742018 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1001 19:47:39.489485  742018 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1001 19:47:39.489510  742018 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1001 19:47:39.490387  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1001 19:47:39.504865  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1001 19:47:39.505175  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1001 19:47:39.510368  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 19:47:39.513085  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1001 19:47:39.614328  742018 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1001 19:47:39.614409  742018 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1001 19:47:39.634991  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 19:47:39.660703  742018 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1001 19:47:39.660775  742018 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1001 19:47:39.672321  742018 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1001 19:47:39.672404  742018 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1001 19:47:39.678696  742018 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 19:47:39.678768  742018 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1001 19:47:39.685547  742018 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1001 19:47:39.685614  742018 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1001 19:47:39.766423  742018 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1001 19:47:39.766498  742018 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1001 19:47:39.892596  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 19:47:39.916914  742018 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1001 19:47:39.916992  742018 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1001 19:47:40.037958  742018 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1001 19:47:40.038035  742018 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1001 19:47:40.039625  742018 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1001 19:47:40.039694  742018 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1001 19:47:40.043925  742018 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1001 19:47:40.044000  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1001 19:47:40.084553  742018 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1001 19:47:40.084630  742018 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1001 19:47:40.095111  742018 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1001 19:47:40.095188  742018 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1001 19:47:40.298090  742018 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 19:47:40.298169  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1001 19:47:40.300962  742018 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1001 19:47:40.301024  742018 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1001 19:47:40.304893  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1001 19:47:40.306000  742018 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1001 19:47:40.306046  742018 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1001 19:47:40.392243  742018 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1001 19:47:40.392313  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1001 19:47:40.423235  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 19:47:40.473246  742018 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1001 19:47:40.473318  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1001 19:47:40.481009  742018 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1001 19:47:40.481088  742018 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1001 19:47:40.612539  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1001 19:47:40.666084  742018 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1001 19:47:40.666159  742018 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1001 19:47:40.710247  742018 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.138589198s)
	I1001 19:47:40.710317  742018 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
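	The CoreDNS rewrite above injects a static host entry so in-cluster workloads can reach the host at host.minikube.internal. A minimal way to confirm the injected stanza (a hypothetical verification step, not part of this run, assuming kubectl is pointed at the addons-164127 cluster):

		# Print the live Corefile; per the sed expression above it should now contain:
		#     hosts {
		#        192.168.49.1 host.minikube.internal
		#        fallthrough
		#     }
		kubectl --context addons-164127 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'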
	I1001 19:47:40.711400  742018 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.894503851s)
	I1001 19:47:40.712152  742018 node_ready.go:35] waiting up to 6m0s for node "addons-164127" to be "Ready" ...
	I1001 19:47:40.720139  742018 node_ready.go:49] node "addons-164127" has status "Ready":"True"
	I1001 19:47:40.720162  742018 node_ready.go:38] duration metric: took 7.858405ms for node "addons-164127" to be "Ready" ...
	I1001 19:47:40.720170  742018 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:47:40.742264  742018 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace to be "Ready" ...
	I1001 19:47:40.742588  742018 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1001 19:47:40.742604  742018 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1001 19:47:41.086803  742018 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1001 19:47:41.086823  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1001 19:47:41.090892  742018 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1001 19:47:41.090916  742018 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1001 19:47:41.214651  742018 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-164127" context rescaled to 1 replicas
	I1001 19:47:41.416019  742018 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1001 19:47:41.416085  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1001 19:47:41.508012  742018 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 19:47:41.508085  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1001 19:47:41.573801  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1001 19:47:41.640536  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.26973262s)
	I1001 19:47:41.953153  742018 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 19:47:41.953180  742018 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1001 19:47:42.431784  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1001 19:47:42.749862  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:47:44.799238  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:47:45.663356  742018 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1001 19:47:45.663494  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:45.699637  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:45.999151  742018 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1001 19:47:46.161135  742018 addons.go:234] Setting addon gcp-auth=true in "addons-164127"
	I1001 19:47:46.161239  742018 host.go:66] Checking if "addons-164127" exists ...
	I1001 19:47:46.161766  742018 cli_runner.go:164] Run: docker container inspect addons-164127 --format={{.State.Status}}
	I1001 19:47:46.188802  742018 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1001 19:47:46.188876  742018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-164127
	I1001 19:47:46.212879  742018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/addons-164127/id_rsa Username:docker}
	I1001 19:47:47.311102  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:47:48.349703  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.942651049s)
	I1001 19:47:48.349907  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.859489737s)
	I1001 19:47:48.350022  742018 addons.go:475] Verifying addon ingress=true in "addons-164127"
	I1001 19:47:48.350123  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.715063548s)
	I1001 19:47:48.350072  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.83968398s)
	I1001 19:47:48.349974  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.845088607s)
	I1001 19:47:48.350109  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.836998724s)
	I1001 19:47:48.349952  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.844760385s)
	I1001 19:47:48.350192  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.457518407s)
	I1001 19:47:48.351057  742018 addons.go:475] Verifying addon metrics-server=true in "addons-164127"
	I1001 19:47:48.350213  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.045266523s)
	I1001 19:47:48.351108  742018 addons.go:475] Verifying addon registry=true in "addons-164127"
	I1001 19:47:48.350286  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.926975283s)
	W1001 19:47:48.351259  742018 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1001 19:47:48.351296  742018 retry.go:31] will retry after 259.32129ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
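	The failure above is an ordering race: the VolumeSnapshotClass manifest is applied in the same batch that creates the snapshot.storage.k8s.io CRDs, before the API server has established them, hence "no matches for kind VolumeSnapshotClass". The addon manager simply retries (and shortly re-applies with --force), which succeeds. A sketch of the equivalent manual fix, assuming kubectl access to the cluster and the manifest paths as staged on the node by the addon manager: wait for the CRDs to report Established, then re-apply the snapshot class.

		# Wait until the snapshot CRDs are served, then re-apply the snapshot class.
		kubectl --context addons-164127 wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
		  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
		  crd/volumesnapshots.snapshot.storage.k8s.io
		kubectl --context addons-164127 apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml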
	I1001 19:47:48.350314  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.737698773s)
	I1001 19:47:48.350389  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.776523999s)
	I1001 19:47:48.352842  742018 out.go:177] * Verifying ingress addon...
	I1001 19:47:48.354265  742018 out.go:177] * Verifying registry addon...
	I1001 19:47:48.354269  742018 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-164127 service yakd-dashboard -n yakd-dashboard
	
	I1001 19:47:48.356985  742018 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1001 19:47:48.357707  742018 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1001 19:47:48.386764  742018 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1001 19:47:48.386804  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:48.389333  742018 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1001 19:47:48.389389  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1001 19:47:48.438245  742018 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
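	The warning above is an optimistic-concurrency conflict: another writer modified the local-path StorageClass between minikube's read and its update, so the update with the stale resourceVersion is rejected. A hedged sketch of re-marking it as the default class by hand (a patch carries no resourceVersion, so it typically avoids this conflict):

		# Re-mark local-path as the default StorageClass after the conflict.
		kubectl --context addons-164127 patch storageclass local-path -p \
		  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'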
	I1001 19:47:48.611162  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1001 19:47:48.865708  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:48.870158  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:49.183272  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.75143778s)
	I1001 19:47:49.183348  742018 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-164127"
	I1001 19:47:49.183525  742018 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.994696348s)
	I1001 19:47:49.186675  742018 out.go:177] * Verifying csi-hostpath-driver addon...
	I1001 19:47:49.186790  742018 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1001 19:47:49.190591  742018 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1001 19:47:49.193498  742018 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1001 19:47:49.195719  742018 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1001 19:47:49.195795  742018 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1001 19:47:49.204193  742018 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1001 19:47:49.204273  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:49.322602  742018 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1001 19:47:49.322668  742018 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1001 19:47:49.350457  742018 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 19:47:49.350544  742018 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1001 19:47:49.364895  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:49.365915  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:49.414187  742018 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1001 19:47:49.696747  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:49.749580  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:47:49.863469  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:49.864890  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:50.196051  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:50.196323  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.585078605s)
	I1001 19:47:50.368803  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:50.370594  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:50.480857  742018 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.066634967s)
	I1001 19:47:50.484592  742018 addons.go:475] Verifying addon gcp-auth=true in "addons-164127"
	I1001 19:47:50.488078  742018 out.go:177] * Verifying gcp-auth addon...
	I1001 19:47:50.491786  742018 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1001 19:47:50.497881  742018 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 19:47:50.696538  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:50.862739  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:50.863901  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:51.195944  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:51.363527  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:51.364555  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:51.696274  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:51.752423  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:47:51.865029  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:51.869646  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:52.197860  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:52.363064  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:52.363817  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:52.695766  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:52.863374  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:52.866068  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:53.196654  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:53.363820  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:53.365241  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:53.696526  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:53.864164  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:53.865087  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:54.196191  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:54.261539  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:47:54.364086  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:54.365619  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:54.695084  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:54.906119  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:54.910023  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:55.195829  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:55.361929  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:55.362523  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:55.695746  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:55.862599  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:55.863890  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:56.195573  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:56.362333  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:56.362667  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:56.695104  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:56.748223  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:47:56.862750  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:56.863738  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:57.195169  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:57.361987  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:57.363197  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:57.695793  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:57.861735  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:57.863494  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:58.195389  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:58.361323  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:58.363125  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:58.695573  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:58.862123  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:58.863049  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:59.195901  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:59.248241  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:47:59.361906  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:47:59.362345  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:59.696218  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:47:59.862873  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:47:59.863561  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:00.196211  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:00.361959  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:00.362804  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:00.695331  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:00.863126  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:00.864194  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:01.196149  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:01.248971  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:01.362635  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:01.363569  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:01.695937  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:01.866675  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:01.867392  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:02.196412  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:02.363492  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:02.364002  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:02.695956  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:02.863683  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:02.864711  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:03.195753  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:03.361422  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:03.362923  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:03.695501  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:03.748904  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:03.862659  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:03.864065  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:04.195692  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:04.362773  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:04.363733  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:04.695003  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:04.861773  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:04.863505  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:05.195467  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:05.361876  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:05.362414  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:05.695700  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:05.863850  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:05.864411  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:06.195732  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:06.247887  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:06.361920  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:06.363042  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:06.695162  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:06.862309  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:06.862660  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:07.195508  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:07.370487  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:07.371467  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:07.695103  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:07.861986  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:07.863025  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:08.195377  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:08.248001  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:08.362151  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:08.362458  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:08.695047  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:08.863695  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:08.864030  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:09.196136  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:09.362071  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:09.363117  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:09.695059  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:09.863287  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:09.866331  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:10.195271  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:10.248650  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:10.361513  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:10.361912  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:10.695600  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:10.861597  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:10.863955  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:11.194861  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:11.362053  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:11.362788  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:11.695883  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:11.863014  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:11.864531  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:12.195084  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:12.249598  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:12.361613  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:12.362234  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:12.695799  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:12.862985  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:12.864401  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:13.196729  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:13.362113  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:13.362560  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:13.696021  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:13.865110  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:13.866649  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:14.195942  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:14.361982  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:14.362395  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:14.695718  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:14.749508  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:14.863042  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:14.864434  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:15.196163  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:15.362741  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:15.363245  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:15.697594  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:15.863123  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:15.864019  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:16.195730  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:16.361832  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:16.363472  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:16.695722  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:16.862398  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:16.862981  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:17.195819  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:17.252508  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:17.363094  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:17.363371  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:17.696075  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:17.861659  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:17.863206  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:18.195676  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:18.363241  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:18.363904  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:18.695750  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:18.862349  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:18.863082  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:19.195616  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:19.362343  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:19.362895  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:19.695809  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:19.748352  742018 pod_ready.go:103] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"False"
	I1001 19:48:19.868086  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:19.868769  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:20.197579  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:20.362882  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:20.363423  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:20.695633  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:20.868986  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:20.869627  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:21.196156  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:21.248686  742018 pod_ready.go:93] pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace has status "Ready":"True"
	I1001 19:48:21.248712  742018 pod_ready.go:82] duration metric: took 40.5064163s for pod "coredns-7c65d6cfc9-rm2vk" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.248723  742018 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-s4r65" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.250640  742018 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-s4r65" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-s4r65" not found
	I1001 19:48:21.250667  742018 pod_ready.go:82] duration metric: took 1.935661ms for pod "coredns-7c65d6cfc9-s4r65" in "kube-system" namespace to be "Ready" ...
	E1001 19:48:21.250678  742018 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-s4r65" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-s4r65" not found
	I1001 19:48:21.250686  742018 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-164127" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.256188  742018 pod_ready.go:93] pod "etcd-addons-164127" in "kube-system" namespace has status "Ready":"True"
	I1001 19:48:21.256248  742018 pod_ready.go:82] duration metric: took 5.553342ms for pod "etcd-addons-164127" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.256270  742018 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-164127" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.261037  742018 pod_ready.go:93] pod "kube-apiserver-addons-164127" in "kube-system" namespace has status "Ready":"True"
	I1001 19:48:21.261062  742018 pod_ready.go:82] duration metric: took 4.783424ms for pod "kube-apiserver-addons-164127" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.261073  742018 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-164127" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.265532  742018 pod_ready.go:93] pod "kube-controller-manager-addons-164127" in "kube-system" namespace has status "Ready":"True"
	I1001 19:48:21.265555  742018 pod_ready.go:82] duration metric: took 4.47427ms for pod "kube-controller-manager-addons-164127" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.265565  742018 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-knxbs" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.362528  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:21.362785  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:21.446667  742018 pod_ready.go:93] pod "kube-proxy-knxbs" in "kube-system" namespace has status "Ready":"True"
	I1001 19:48:21.446691  742018 pod_ready.go:82] duration metric: took 181.118868ms for pod "kube-proxy-knxbs" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.446703  742018 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-164127" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.695766  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:21.846691  742018 pod_ready.go:93] pod "kube-scheduler-addons-164127" in "kube-system" namespace has status "Ready":"True"
	I1001 19:48:21.846715  742018 pod_ready.go:82] duration metric: took 400.004195ms for pod "kube-scheduler-addons-164127" in "kube-system" namespace to be "Ready" ...
	I1001 19:48:21.846724  742018 pod_ready.go:39] duration metric: took 41.126501247s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 19:48:21.846737  742018 api_server.go:52] waiting for apiserver process to appear ...
	I1001 19:48:21.846799  742018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 19:48:21.863419  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:21.863695  742018 api_server.go:72] duration metric: took 43.749370413s to wait for apiserver process to appear ...
	I1001 19:48:21.864004  742018 api_server.go:88] waiting for apiserver healthz status ...
	I1001 19:48:21.864049  742018 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1001 19:48:21.863981  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:21.872431  742018 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1001 19:48:21.873454  742018 api_server.go:141] control plane version: v1.31.1
	I1001 19:48:21.873515  742018 api_server.go:131] duration metric: took 9.442813ms to wait for apiserver health ...
	I1001 19:48:21.873539  742018 system_pods.go:43] waiting for kube-system pods to appear ...
	I1001 19:48:22.103456  742018 system_pods.go:59] 18 kube-system pods found
	I1001 19:48:22.103501  742018 system_pods.go:61] "coredns-7c65d6cfc9-rm2vk" [a92093c6-a775-47e8-9157-1395eb3502b7] Running
	I1001 19:48:22.103511  742018 system_pods.go:61] "csi-hostpath-attacher-0" [bb35f0c7-6d8e-468c-bf4a-800c5056b19f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 19:48:22.103520  742018 system_pods.go:61] "csi-hostpath-resizer-0" [3a82995f-0610-4e00-99e7-d7ea896857d3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 19:48:22.103529  742018 system_pods.go:61] "csi-hostpathplugin-wqcsj" [34ca040f-d09b-4e46-b484-13e2a8f7b006] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 19:48:22.103534  742018 system_pods.go:61] "etcd-addons-164127" [c1b4326e-a494-47e4-97b4-5dfda3676fac] Running
	I1001 19:48:22.103538  742018 system_pods.go:61] "kindnet-q9pg8" [6cc53f02-89b1-4d75-b66f-c4faef41eb91] Running
	I1001 19:48:22.103543  742018 system_pods.go:61] "kube-apiserver-addons-164127" [7f2ff654-ddf0-497d-ba5e-b1eb66625ce9] Running
	I1001 19:48:22.103548  742018 system_pods.go:61] "kube-controller-manager-addons-164127" [e02179d4-3b6a-4103-8ec0-2ca3297bb239] Running
	I1001 19:48:22.103552  742018 system_pods.go:61] "kube-ingress-dns-minikube" [9873948e-c4b1-424b-83ec-3893b3900252] Running
	I1001 19:48:22.103555  742018 system_pods.go:61] "kube-proxy-knxbs" [a714d5db-8ce3-4434-811b-5059675822e9] Running
	I1001 19:48:22.103559  742018 system_pods.go:61] "kube-scheduler-addons-164127" [7610b7be-c5e3-41b6-a2a0-83d4ad9e1c73] Running
	I1001 19:48:22.103565  742018 system_pods.go:61] "metrics-server-84c5f94fbc-d5z7g" [2cb020f5-d6d4-43bf-b189-8c27fde55bde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 19:48:22.103575  742018 system_pods.go:61] "nvidia-device-plugin-daemonset-79kbd" [324167bb-5d6b-4381-b1b2-61d389cb657d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 19:48:22.103582  742018 system_pods.go:61] "registry-66c9cd494c-v9l9x" [74b85b16-903b-4a08-bdb8-9b3c7422ae07] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 19:48:22.103588  742018 system_pods.go:61] "registry-proxy-kzs7s" [be6b07d5-273a-4cd0-897c-5e38dc0e0531] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 19:48:22.103597  742018 system_pods.go:61] "snapshot-controller-56fcc65765-8hx54" [807a5ff1-d6c0-47f4-b416-3170dffcb5da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 19:48:22.103605  742018 system_pods.go:61] "snapshot-controller-56fcc65765-g2f2f" [722cd0c1-99a0-4199-b0cf-ae3b017690a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 19:48:22.103614  742018 system_pods.go:61] "storage-provisioner" [1853a0d6-4fd3-42f5-b9b9-978e2526ca5d] Running
	I1001 19:48:22.103620  742018 system_pods.go:74] duration metric: took 230.07279ms to wait for pod list to return data ...
	I1001 19:48:22.103629  742018 default_sa.go:34] waiting for default service account to be created ...
	I1001 19:48:22.195793  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:22.246095  742018 default_sa.go:45] found service account: "default"
	I1001 19:48:22.246123  742018 default_sa.go:55] duration metric: took 142.482079ms for default service account to be created ...
	I1001 19:48:22.246134  742018 system_pods.go:116] waiting for k8s-apps to be running ...
	I1001 19:48:22.361799  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:22.364058  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:22.454756  742018 system_pods.go:86] 18 kube-system pods found
	I1001 19:48:22.454848  742018 system_pods.go:89] "coredns-7c65d6cfc9-rm2vk" [a92093c6-a775-47e8-9157-1395eb3502b7] Running
	I1001 19:48:22.454874  742018 system_pods.go:89] "csi-hostpath-attacher-0" [bb35f0c7-6d8e-468c-bf4a-800c5056b19f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1001 19:48:22.454916  742018 system_pods.go:89] "csi-hostpath-resizer-0" [3a82995f-0610-4e00-99e7-d7ea896857d3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1001 19:48:22.454945  742018 system_pods.go:89] "csi-hostpathplugin-wqcsj" [34ca040f-d09b-4e46-b484-13e2a8f7b006] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1001 19:48:22.454964  742018 system_pods.go:89] "etcd-addons-164127" [c1b4326e-a494-47e4-97b4-5dfda3676fac] Running
	I1001 19:48:22.454984  742018 system_pods.go:89] "kindnet-q9pg8" [6cc53f02-89b1-4d75-b66f-c4faef41eb91] Running
	I1001 19:48:22.455015  742018 system_pods.go:89] "kube-apiserver-addons-164127" [7f2ff654-ddf0-497d-ba5e-b1eb66625ce9] Running
	I1001 19:48:22.455038  742018 system_pods.go:89] "kube-controller-manager-addons-164127" [e02179d4-3b6a-4103-8ec0-2ca3297bb239] Running
	I1001 19:48:22.455058  742018 system_pods.go:89] "kube-ingress-dns-minikube" [9873948e-c4b1-424b-83ec-3893b3900252] Running
	I1001 19:48:22.455080  742018 system_pods.go:89] "kube-proxy-knxbs" [a714d5db-8ce3-4434-811b-5059675822e9] Running
	I1001 19:48:22.455112  742018 system_pods.go:89] "kube-scheduler-addons-164127" [7610b7be-c5e3-41b6-a2a0-83d4ad9e1c73] Running
	I1001 19:48:22.455136  742018 system_pods.go:89] "metrics-server-84c5f94fbc-d5z7g" [2cb020f5-d6d4-43bf-b189-8c27fde55bde] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1001 19:48:22.455159  742018 system_pods.go:89] "nvidia-device-plugin-daemonset-79kbd" [324167bb-5d6b-4381-b1b2-61d389cb657d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1001 19:48:22.455182  742018 system_pods.go:89] "registry-66c9cd494c-v9l9x" [74b85b16-903b-4a08-bdb8-9b3c7422ae07] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1001 19:48:22.455216  742018 system_pods.go:89] "registry-proxy-kzs7s" [be6b07d5-273a-4cd0-897c-5e38dc0e0531] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1001 19:48:22.455242  742018 system_pods.go:89] "snapshot-controller-56fcc65765-8hx54" [807a5ff1-d6c0-47f4-b416-3170dffcb5da] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 19:48:22.455262  742018 system_pods.go:89] "snapshot-controller-56fcc65765-g2f2f" [722cd0c1-99a0-4199-b0cf-ae3b017690a9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1001 19:48:22.455281  742018 system_pods.go:89] "storage-provisioner" [1853a0d6-4fd3-42f5-b9b9-978e2526ca5d] Running
	I1001 19:48:22.455317  742018 system_pods.go:126] duration metric: took 209.164113ms to wait for k8s-apps to be running ...
	I1001 19:48:22.455342  742018 system_svc.go:44] waiting for kubelet service to be running ....
	I1001 19:48:22.455425  742018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 19:48:22.468024  742018 system_svc.go:56] duration metric: took 12.672401ms WaitForService to wait for kubelet
	I1001 19:48:22.468051  742018 kubeadm.go:582] duration metric: took 44.353725959s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 19:48:22.468071  742018 node_conditions.go:102] verifying NodePressure condition ...
	I1001 19:48:22.648027  742018 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1001 19:48:22.648062  742018 node_conditions.go:123] node cpu capacity is 2
	I1001 19:48:22.648075  742018 node_conditions.go:105] duration metric: took 179.998641ms to run NodePressure ...
	I1001 19:48:22.648089  742018 start.go:241] waiting for startup goroutines ...
	I1001 19:48:22.695381  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:22.863292  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:22.864081  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:23.196971  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:23.363366  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:23.363901  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:23.695196  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:23.864082  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:23.865535  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:24.196745  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:24.362679  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:24.363460  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:24.694795  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:24.863396  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:24.863800  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:25.195751  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:25.362882  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:25.363426  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:25.696542  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:25.861688  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:25.862620  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:26.195267  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:26.362161  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:26.364606  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:26.696105  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:26.861930  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:26.862835  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:27.195858  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:27.361926  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:27.362945  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:27.695432  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:27.863485  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:27.864146  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:28.196919  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:28.362906  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:28.363452  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:28.696184  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:28.865128  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:28.865813  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:29.195926  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:29.364663  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:29.366144  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:29.698544  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:29.862153  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:29.863606  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:30.196761  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:30.363001  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:30.364024  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:30.695851  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:30.862930  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:30.863564  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:31.196109  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:31.363008  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:31.364711  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:31.694941  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:31.863322  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:31.864082  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:32.196824  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:32.362960  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:32.363677  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:32.699234  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:32.862883  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:32.863396  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:33.203104  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:33.362547  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:33.363547  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:33.696907  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:33.862756  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:33.863658  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:34.195962  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:34.362991  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:34.363544  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:34.695297  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:34.863708  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:34.864941  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:35.194970  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:35.363291  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:35.365267  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:35.696051  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:35.862524  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:35.863584  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:36.197540  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:36.363071  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:36.364038  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:36.694975  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:36.862144  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:36.863999  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:37.195716  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:37.365043  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:37.366006  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:37.697409  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:37.869927  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:37.872662  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:38.216256  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:38.364075  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:38.365567  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:38.695721  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:38.864034  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:38.866402  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:39.196484  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:39.380177  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:39.381438  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:39.697185  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:39.862591  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:39.863307  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:40.195908  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:40.361831  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:40.363547  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:40.695828  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:40.862354  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:40.863604  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:41.197945  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:41.365830  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:41.368273  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:41.698148  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:41.863440  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1001 19:48:41.864600  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:42.196199  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:42.364165  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:42.366447  742018 kapi.go:107] duration metric: took 54.008735966s to wait for kubernetes.io/minikube-addons=registry ...
	I1001 19:48:42.695535  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:42.862508  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:43.197039  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:43.362786  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:43.695070  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:43.862287  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:44.197000  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:44.362647  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:44.695949  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:44.862746  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:45.196950  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:45.371064  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:45.696002  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:45.862031  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:46.197144  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:46.363902  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:46.697623  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:46.863750  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:47.195107  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:47.363616  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:47.696510  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:47.861679  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:48.197986  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:48.362635  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:48.695376  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:48.862655  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:49.196508  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:49.361537  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:49.696049  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:49.862889  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:50.195768  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:50.362194  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:50.695822  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:50.861523  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:51.195393  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:51.362018  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:51.696501  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:51.861961  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:52.196336  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:52.362365  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:52.696479  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:52.862301  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:53.195283  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:53.362969  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:53.697041  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:53.867074  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:54.197698  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:54.362571  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:54.697722  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:54.862134  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:55.197917  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:55.397551  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:55.696680  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:55.861879  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:56.196276  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:56.362926  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:56.696209  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:56.862481  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:57.212041  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:57.363162  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:57.696921  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:57.862541  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:58.196423  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:58.362583  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:58.696076  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:58.862089  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:59.196905  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:59.365065  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:48:59.695583  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:48:59.862327  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:00.196247  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:49:00.363025  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:00.694981  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:49:00.861967  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:01.195055  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1001 19:49:01.362585  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:01.696100  742018 kapi.go:107] duration metric: took 1m12.505506971s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1001 19:49:01.862236  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:02.362792  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:02.862151  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:03.361947  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:03.862160  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:04.362020  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:04.862187  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:05.362256  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:05.862768  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:06.362831  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:06.862071  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:07.362101  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:07.862585  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:08.362280  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:08.862432  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:09.362605  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:09.862669  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:10.372822  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:10.862249  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:11.362193  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:11.862301  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:12.362438  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:12.591823  742018 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1001 19:49:12.591898  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:12.862866  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:12.994986  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:13.363570  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:13.496894  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:13.862879  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:13.995290  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:14.362868  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:14.495284  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:14.863200  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:14.995311  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:15.362263  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:15.495645  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:15.861894  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:15.995327  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:16.362002  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:16.495092  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:16.862058  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:16.995502  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:17.361969  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:17.495250  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:17.862904  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:17.995108  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:18.362850  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:18.495156  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:18.862252  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:18.995348  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:19.362325  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:19.495352  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:19.861879  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:19.995251  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:20.363115  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:20.495545  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:20.862037  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:20.995070  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:21.362676  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:21.494688  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:21.862499  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:21.995659  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:22.362446  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:22.495307  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:22.862844  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:22.995020  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:23.362308  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:23.495347  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:23.861900  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:23.995554  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:24.361809  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:24.494852  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:24.862892  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:24.995213  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:25.362646  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:25.495892  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:25.861744  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:25.995076  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:26.362682  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:26.496329  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:26.862041  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:26.995150  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:27.362599  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:27.496073  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:27.862050  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:27.995698  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:28.363600  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:28.495237  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:28.861858  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:28.994834  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:29.362781  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:29.494743  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:29.862711  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:29.995806  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:30.361608  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:30.495822  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:30.862835  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:30.994829  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:31.362608  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:31.495598  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:31.862222  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:31.995367  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:32.362498  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:32.496134  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:32.862349  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:32.995684  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:33.362460  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:33.495439  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:33.862797  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:33.994951  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:34.362535  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:34.495828  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:34.861845  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:34.995150  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:35.361742  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:35.494676  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:35.862110  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:35.995323  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:36.361641  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:36.496115  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:36.861536  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:36.995597  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:37.362247  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:37.495283  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:37.863294  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:37.995576  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:38.362482  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:38.499248  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:38.862318  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:38.995532  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:39.362351  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:39.495296  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:39.862264  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:39.995392  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:40.362394  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:40.495802  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:40.862221  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:40.995274  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:41.362633  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:41.495827  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:41.861998  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:41.995048  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:42.362936  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:42.495625  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:42.862619  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:42.995737  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:43.362735  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:43.494793  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:43.862612  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:43.995793  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:44.363153  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:44.495538  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:44.862036  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:44.995585  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:45.362235  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:45.495268  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:45.862908  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:45.995146  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:46.362782  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:46.495417  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:46.861589  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:46.995782  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:47.362119  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:47.495027  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:47.862675  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:47.995799  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:48.361839  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:48.495481  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:48.862538  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:48.995865  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:49.362094  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:49.496213  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:49.862511  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:49.995718  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:50.362442  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:50.496374  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:50.862063  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:50.995178  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:51.361877  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:51.501571  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:51.862214  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:51.995686  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:52.362434  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:52.495664  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:52.862316  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:52.995580  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:53.362121  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:53.496575  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:53.863272  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:53.996645  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:54.362521  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:54.496162  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:54.862218  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:54.995293  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:55.362315  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:55.495897  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:55.862159  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:55.995529  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:56.362995  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:56.495920  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:56.863927  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:56.995107  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:57.361482  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:57.495841  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:57.863126  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:57.995267  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:58.362457  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:58.496230  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:58.862497  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:58.995778  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:59.362425  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:59.495626  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:49:59.862598  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:49:59.995844  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:00.362628  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:00.496291  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:00.862673  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:00.996223  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:01.361664  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:01.495506  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:01.865038  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:01.995327  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:02.367067  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:02.495663  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:02.862978  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:02.995187  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:03.363435  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:03.495760  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:03.862709  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:03.995118  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:04.362855  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:04.495116  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:04.864149  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:04.998636  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:05.363980  742018 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1001 19:50:05.495075  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:05.862091  742018 kapi.go:107] duration metric: took 2m17.505104549s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1001 19:50:05.995434  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:06.555699  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:06.995479  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:07.496166  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:07.996051  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:08.495566  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:08.996425  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:09.495664  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:09.995060  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:10.495875  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:10.995740  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:11.495279  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:11.994862  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:12.495996  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:12.995216  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:13.496902  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:13.996161  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:14.495365  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:14.995250  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:15.495250  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:15.995666  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:16.495295  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:16.995797  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:17.495481  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:17.995703  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:18.495669  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:18.996286  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:19.495739  742018 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1001 19:50:19.995574  742018 kapi.go:107] duration metric: took 2m29.503785575s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1001 19:50:19.998111  742018 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-164127 cluster.
	I1001 19:50:20.001029  742018 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1001 19:50:20.003103  742018 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1001 19:50:20.006396  742018 out.go:177] * Enabled addons: ingress-dns, volcano, storage-provisioner, nvidia-device-plugin, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1001 19:50:20.009009  742018 addons.go:510] duration metric: took 2m41.894428003s for enable addons: enabled=[ingress-dns volcano storage-provisioner nvidia-device-plugin cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1001 19:50:20.009071  742018 start.go:246] waiting for cluster config update ...
	I1001 19:50:20.009094  742018 start.go:255] writing updated cluster config ...
	I1001 19:50:20.010078  742018 ssh_runner.go:195] Run: rm -f paused
	I1001 19:50:20.340696  742018 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1001 19:50:20.342521  742018 out.go:177] * Done! kubectl is now configured to use "addons-164127" cluster and "default" namespace by default
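[editor's note] The long runs of kapi.go:96 / kapi.go:107 lines above come from minikube repeatedly listing pods by label selector until they leave Pending and then reporting the elapsed wait. As a rough illustration only — this is not minikube's actual kapi.go, and the helper name, 500ms interval, and Running-phase check are assumptions — a client-go polling loop of that shape could look like:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPods polls pods matching selector in ns until all report Running or timeout expires.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // transient error or pods not created yet: keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
	}
	return err
}

In this run the analogous waits did eventually succeed, which is why the kapi.go:107 duration lines appear for csi-hostpath-driver, ingress-nginx, and gcp-auth above.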
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	528ba7582c465       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   9810d05e7a225       gcp-auth-89d5ffd79-8xsdp
	8513d0500466b       289a818c8d9c5       3 minutes ago       Running             controller                               0                   2227871577e8e       ingress-nginx-controller-bc57996ff-6nwlh
	5cd4c8be4cc0b       420193b27261a       4 minutes ago       Exited              patch                                    2                   7040139b542ea       ingress-nginx-admission-patch-czj56
	c8558c7ce9970       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   d357b9917c941       csi-hostpathplugin-wqcsj
	d1dbe7f2a7e1f       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   d357b9917c941       csi-hostpathplugin-wqcsj
	d89a89beb6983       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   d357b9917c941       csi-hostpathplugin-wqcsj
	652052ae44400       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   d357b9917c941       csi-hostpathplugin-wqcsj
	5297a4662efeb       1a9605c872c1d       4 minutes ago       Running             admission                                0                   ed7ba7725a22a       volcano-admission-5874dfdd79-2pl84
	35a40bcfd2b1a       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   d357b9917c941       csi-hostpathplugin-wqcsj
	0e3f71f9857aa       420193b27261a       4 minutes ago       Exited              create                                   0                   712ccbd3e0a70       ingress-nginx-admission-create-q6df2
	10f0ab0e0c83a       6aa88c604f2b4       4 minutes ago       Running             volcano-scheduler                        0                   68eff5c295187       volcano-scheduler-6c9778cbdf-tjdzc
	4e57b3495b967       487fa743e1e22       4 minutes ago       Running             csi-resizer                              0                   f5d9a51c076a9       csi-hostpath-resizer-0
	f2f0253702290       9a80d518f102c       4 minutes ago       Running             csi-attacher                             0                   e4b9f71c5fdca       csi-hostpath-attacher-0
	a832419715d1f       23cbb28ae641a       4 minutes ago       Running             volcano-controllers                      0                   d2a813b5dfe51       volcano-controllers-789ffc5785-ncwd4
	2e6346f15f317       1461903ec4fe9       4 minutes ago       Running             csi-external-health-monitor-controller   0                   d357b9917c941       csi-hostpathplugin-wqcsj
	a3955e97492d0       c9cf76bb104e1       4 minutes ago       Running             registry                                 0                   b1cd2e8d3cbab       registry-66c9cd494c-v9l9x
	9697ee4bf08a4       7ce2150c8929b       4 minutes ago       Running             local-path-provisioner                   0                   2010a1bb6b814       local-path-provisioner-86d989889c-gz9j9
	34bc6859f8c3c       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   68f4f327325ef       snapshot-controller-56fcc65765-g2f2f
	d67287937884f       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   1a57d8b16ed71       snapshot-controller-56fcc65765-8hx54
	3d525e3ce17e4       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   86efc027dad5d       metrics-server-84c5f94fbc-d5z7g
	2ff489a2a2db1       77bdba588b953       5 minutes ago       Running             yakd                                     0                   54810f0e30e07       yakd-dashboard-67d98fc6b-5784p
	5b97f6a616fe4       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   84bacbf410cc4       nvidia-device-plugin-daemonset-79kbd
	0a7b999702460       f7ed138f698f6       5 minutes ago       Running             registry-proxy                           0                   063d9cff8de62       registry-proxy-kzs7s
	99d3051e0c585       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   fdb559f2c5a46       cloud-spanner-emulator-5b584cc74-qxsnh
	ad18b58649c31       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   f849902c96b4e       coredns-7c65d6cfc9-rm2vk
	1a3d36251075a       4f725bf50aaa5       5 minutes ago       Running             gadget                                   0                   5e70f2606be97       gadget-m4sff
	c7186d9c86677       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   58a96a4372b1b       kube-ingress-dns-minikube
	e20e8773396de       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   9c2582bbde179       storage-provisioner
	32daf339c1371       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   37644225b70ef       kindnet-q9pg8
	83b9717acb151       24a140c548c07       5 minutes ago       Running             kube-proxy                               0                   c7bf5a659d2a3       kube-proxy-knxbs
	9ab505e2802d3       27e3830e14027       6 minutes ago       Running             etcd                                     0                   ebf7da9aaac35       etcd-addons-164127
	82bef58213f17       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   d74debe363f44       kube-scheduler-addons-164127
	c69aef49f1e9f       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   c4d57bd24ecf3       kube-apiserver-addons-164127
	0207c7ffab113       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   d57b4d0c99e5f       kube-controller-manager-addons-164127
	
	
	==> containerd <==
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.920128645Z" level=info msg="TearDown network for sandbox \"f3d4e54fb6e36491b4751c1203a1c348cb9ecb47b0370e30bec90ef525739a65\" successfully"
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.920175790Z" level=info msg="StopPodSandbox for \"f3d4e54fb6e36491b4751c1203a1c348cb9ecb47b0370e30bec90ef525739a65\" returns successfully"
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.921002611Z" level=info msg="RemovePodSandbox for \"f3d4e54fb6e36491b4751c1203a1c348cb9ecb47b0370e30bec90ef525739a65\""
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.921047738Z" level=info msg="Forcibly stopping sandbox \"f3d4e54fb6e36491b4751c1203a1c348cb9ecb47b0370e30bec90ef525739a65\""
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.928746609Z" level=info msg="TearDown network for sandbox \"f3d4e54fb6e36491b4751c1203a1c348cb9ecb47b0370e30bec90ef525739a65\" successfully"
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.935201529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f3d4e54fb6e36491b4751c1203a1c348cb9ecb47b0370e30bec90ef525739a65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.935323602Z" level=info msg="RemovePodSandbox \"f3d4e54fb6e36491b4751c1203a1c348cb9ecb47b0370e30bec90ef525739a65\" returns successfully"
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.935986430Z" level=info msg="StopPodSandbox for \"e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab\""
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.944189134Z" level=info msg="TearDown network for sandbox \"e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab\" successfully"
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.944228485Z" level=info msg="StopPodSandbox for \"e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab\" returns successfully"
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.944904088Z" level=info msg="RemovePodSandbox for \"e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab\""
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.944958946Z" level=info msg="Forcibly stopping sandbox \"e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab\""
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.952232769Z" level=info msg="TearDown network for sandbox \"e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab\" successfully"
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.958635817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 01 19:50:32 addons-164127 containerd[814]: time="2024-10-01T19:50:32.958782365Z" level=info msg="RemovePodSandbox \"e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab\" returns successfully"
	Oct 01 19:51:32 addons-164127 containerd[814]: time="2024-10-01T19:51:32.963098985Z" level=info msg="RemoveContainer for \"c583344596fcba9f6871b3efa60cd5767e8f0187e1ac99d4b65299d3336bebdc\""
	Oct 01 19:51:32 addons-164127 containerd[814]: time="2024-10-01T19:51:32.973659750Z" level=info msg="RemoveContainer for \"c583344596fcba9f6871b3efa60cd5767e8f0187e1ac99d4b65299d3336bebdc\" returns successfully"
	Oct 01 19:51:32 addons-164127 containerd[814]: time="2024-10-01T19:51:32.975711282Z" level=info msg="StopPodSandbox for \"489414880b685a86c03913ee5740b953978fd4247628e96f7fd91eacf132585e\""
	Oct 01 19:51:32 addons-164127 containerd[814]: time="2024-10-01T19:51:32.984284071Z" level=info msg="TearDown network for sandbox \"489414880b685a86c03913ee5740b953978fd4247628e96f7fd91eacf132585e\" successfully"
	Oct 01 19:51:32 addons-164127 containerd[814]: time="2024-10-01T19:51:32.984923170Z" level=info msg="StopPodSandbox for \"489414880b685a86c03913ee5740b953978fd4247628e96f7fd91eacf132585e\" returns successfully"
	Oct 01 19:51:32 addons-164127 containerd[814]: time="2024-10-01T19:51:32.986826389Z" level=info msg="RemovePodSandbox for \"489414880b685a86c03913ee5740b953978fd4247628e96f7fd91eacf132585e\""
	Oct 01 19:51:32 addons-164127 containerd[814]: time="2024-10-01T19:51:32.986872214Z" level=info msg="Forcibly stopping sandbox \"489414880b685a86c03913ee5740b953978fd4247628e96f7fd91eacf132585e\""
	Oct 01 19:51:32 addons-164127 containerd[814]: time="2024-10-01T19:51:32.995441343Z" level=info msg="TearDown network for sandbox \"489414880b685a86c03913ee5740b953978fd4247628e96f7fd91eacf132585e\" successfully"
	Oct 01 19:51:33 addons-164127 containerd[814]: time="2024-10-01T19:51:33.001813213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"489414880b685a86c03913ee5740b953978fd4247628e96f7fd91eacf132585e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 01 19:51:33 addons-164127 containerd[814]: time="2024-10-01T19:51:33.002072350Z" level=info msg="RemovePodSandbox \"489414880b685a86c03913ee5740b953978fd4247628e96f7fd91eacf132585e\" returns successfully"
	
	
	==> coredns [ad18b58649c31b59e7b315ef19a4dec5bc2952fc74c4c253036a72148a2b6b7c] <==
	[INFO] 10.244.0.2:56909 - 2990 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000055342s
	[INFO] 10.244.0.2:56909 - 3607 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001447442s
	[INFO] 10.244.0.2:56909 - 512 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001529975s
	[INFO] 10.244.0.2:56909 - 10769 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00005819s
	[INFO] 10.244.0.2:56909 - 54047 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000052036s
	[INFO] 10.244.0.2:43050 - 52368 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000120169s
	[INFO] 10.244.0.2:43050 - 52180 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000242472s
	[INFO] 10.244.0.2:48075 - 27554 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000062522s
	[INFO] 10.244.0.2:48075 - 27373 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000049771s
	[INFO] 10.244.0.2:35132 - 24212 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000135316s
	[INFO] 10.244.0.2:35132 - 24376 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108362s
	[INFO] 10.244.0.2:35084 - 10891 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001334082s
	[INFO] 10.244.0.2:35084 - 11071 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001628041s
	[INFO] 10.244.0.2:53991 - 60099 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077127s
	[INFO] 10.244.0.2:53991 - 59704 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000044857s
	[INFO] 10.244.0.24:54208 - 50406 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175848s
	[INFO] 10.244.0.24:39126 - 45218 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000125068s
	[INFO] 10.244.0.24:46492 - 17711 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000128277s
	[INFO] 10.244.0.24:34678 - 59526 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00011148s
	[INFO] 10.244.0.24:53484 - 16435 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106459s
	[INFO] 10.244.0.24:54430 - 56946 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000092716s
	[INFO] 10.244.0.24:54183 - 36381 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001897434s
	[INFO] 10.244.0.24:43559 - 30122 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002456363s
	[INFO] 10.244.0.24:59784 - 57716 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001297866s
	[INFO] 10.244.0.24:39134 - 48414 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001585014s
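[editor's note] The coredns entries above show each lookup for registry.kube-system and storage.googleapis.com being expanded through the pod's DNS search list first (the extra .svc.cluster.local, .cluster.local, and .us-east-2.compute.internal suffixes all answer NXDOMAIN) before the intended name resolves. Purely as an illustrative Go sketch, not part of the test: querying a fully qualified name with a trailing dot marks it as rooted, so the resolver skips that search-list expansion.

package main

import (
	"fmt"
	"net"
)

func main() {
	// The trailing dot makes the name absolute, so search domains such as
	// cluster.local or us-east-2.compute.internal are not appended.
	addrs, err := net.LookupHost("registry.kube-system.svc.cluster.local.")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}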
	
	
	==> describe nodes <==
	Name:               addons-164127
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-164127
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=addons-164127
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T19_47_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-164127
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-164127"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 19:47:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-164127
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 19:53:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 19:50:36 +0000   Tue, 01 Oct 2024 19:47:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 19:50:36 +0000   Tue, 01 Oct 2024 19:47:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 19:50:36 +0000   Tue, 01 Oct 2024 19:47:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 19:50:36 +0000   Tue, 01 Oct 2024 19:47:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-164127
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 fd69095162774a899d187d8a2a320a56
	  System UUID:                da4879cd-de88-4126-95b8-88ab617cbb1e
	  Boot ID:                    3aa8f718-8507-41e8-80ca-0eb33f6ce70e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-qxsnh      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  gadget                      gadget-m4sff                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  gcp-auth                    gcp-auth-89d5ffd79-8xsdp                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-6nwlh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m53s
	  kube-system                 coredns-7c65d6cfc9-rm2vk                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m1s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpathplugin-wqcsj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 etcd-addons-164127                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m6s
	  kube-system                 kindnet-q9pg8                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m1s
	  kube-system                 kube-apiserver-addons-164127                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-addons-164127       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-knxbs                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 kube-scheduler-addons-164127                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 metrics-server-84c5f94fbc-d5z7g             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-79kbd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-66c9cd494c-v9l9x                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 registry-proxy-kzs7s                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 snapshot-controller-56fcc65765-8hx54        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-g2f2f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  local-path-storage          local-path-provisioner-86d989889c-gz9j9     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-5874dfdd79-2pl84          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-controllers-789ffc5785-ncwd4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-scheduler-6c9778cbdf-tjdzc          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-5784p              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m59s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m13s (x8 over 6m13s)  kubelet          Node addons-164127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m13s (x7 over 6m13s)  kubelet          Node addons-164127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m13s (x7 over 6m13s)  kubelet          Node addons-164127 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m13s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m6s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m6s                   kubelet          Node addons-164127 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m6s                   kubelet          Node addons-164127 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m6s                   kubelet          Node addons-164127 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m2s                   node-controller  Node addons-164127 event: Registered Node addons-164127 in Controller
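	
	Note on the node capacity above: the node advertises 2 allocatable CPUs, and the "Allocated resources" table already accounts for 1050m (52%) of CPU requests from the 27 non-terminated pods, leaving roughly 950m of schedulable headroom. Any new pod whose CPU request exceeds that remainder will stay Pending as unschedulable on this single-node cluster. The arithmetic is the same sum that `kubectl describe node` performs; a minimal client-go sketch of it (not part of the test suite; the kubeconfig path and node name flag defaults are assumptions) might look like this:
	
	// headroom.go: a sketch that reproduces the "Allocated resources" arithmetic
	// above, summing CPU requests of non-terminated pods bound to a node and
	// subtracting them from the node's allocatable CPU.
	package main
	
	import (
		"context"
		"flag"
		"fmt"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		node := flag.String("node", "addons-164127", "node to inspect")
		kubeconfig := flag.String("kubeconfig", filepath.Join(homedir.HomeDir(), ".kube", "config"), "path to kubeconfig")
		flag.Parse()
	
		cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx := context.Background()
	
		n, err := cs.CoreV1().Nodes().Get(ctx, *node, metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		allocatable := n.Status.Allocatable[corev1.ResourceCPU]
	
		// Non-terminated pods on this node, matching what `kubectl describe node` counts.
		pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
			FieldSelector: "spec.nodeName=" + *node +
				",status.phase!=" + string(corev1.PodSucceeded) +
				",status.phase!=" + string(corev1.PodFailed),
		})
		if err != nil {
			panic(err)
		}
	
		var requested int64 // millicores
		for _, p := range pods.Items {
			for _, c := range p.Spec.Containers {
				if q, ok := c.Resources.Requests[corev1.ResourceCPU]; ok {
					requested += q.MilliValue()
				}
			}
		}
		fmt.Printf("cpu requested: %dm / allocatable: %dm / headroom: %dm\n",
			requested, allocatable.MilliValue(), allocatable.MilliValue()-requested)
	}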
	
	
	==> dmesg <==
	
	
	==> etcd [9ab505e2802d34dbe6e65250d3010debbca14412bdd72303d984ee48104ec160] <==
	{"level":"info","ts":"2024-10-01T19:47:27.269219Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-01T19:47:27.269466Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-01T19:47:27.269495Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-01T19:47:27.269606Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-01T19:47:27.269645Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-10-01T19:47:27.816485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-01T19:47:27.816721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-01T19:47:27.816817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-01T19:47:27.816959Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-01T19:47:27.817043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-01T19:47:27.817134Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-01T19:47:27.817214Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-01T19:47:27.818887Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:47:27.824699Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-164127 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-01T19:47:27.824988Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:47:27.826057Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:47:27.827260Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-01T19:47:27.827461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-01T19:47:27.827822Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:47:27.850160Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:47:27.836681Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-01T19:47:27.845260Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-01T19:47:27.876379Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-01T19:47:27.879394Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-01T19:47:27.879507Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [528ba7582c46542df8dafb46772bd841afddfd8bc16061c152dd861776e53941] <==
	2024/10/01 19:50:19 GCP Auth Webhook started!
	2024/10/01 19:50:36 Ready to marshal response ...
	2024/10/01 19:50:36 Ready to write response ...
	2024/10/01 19:50:37 Ready to marshal response ...
	2024/10/01 19:50:37 Ready to write response ...
	
	
	==> kernel <==
	 19:53:39 up  3:36,  0 users,  load average: 0.41, 1.29, 1.95
	Linux addons-164127 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [32daf339c1371802ec95e8dfc7abec5747505b4a4895cb4453724f6eae64a800] <==
	I1001 19:51:29.648492       1 main.go:299] handling current node
	I1001 19:51:39.641422       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:51:39.641453       1 main.go:299] handling current node
	I1001 19:51:49.647867       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:51:49.647904       1 main.go:299] handling current node
	I1001 19:51:59.650478       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:51:59.650512       1 main.go:299] handling current node
	I1001 19:52:09.648626       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:52:09.648660       1 main.go:299] handling current node
	I1001 19:52:19.641306       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:52:19.641344       1 main.go:299] handling current node
	I1001 19:52:29.650329       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:52:29.650425       1 main.go:299] handling current node
	I1001 19:52:39.641738       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:52:39.641766       1 main.go:299] handling current node
	I1001 19:52:49.648832       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:52:49.648875       1 main.go:299] handling current node
	I1001 19:52:59.647997       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:52:59.648032       1 main.go:299] handling current node
	I1001 19:53:09.648715       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:53:09.648814       1 main.go:299] handling current node
	I1001 19:53:19.647854       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:53:19.647889       1 main.go:299] handling current node
	I1001 19:53:29.649886       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I1001 19:53:29.650093       1 main.go:299] handling current node
	
	
	==> kube-apiserver [c69aef49f1e9fc137cd384c8e9ad466ffdc5dba6355676f7cef2b887d6ea0e94] <==
	 > logger="UnhandledError"
	I1001 19:48:45.351459       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1001 19:48:53.471009       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.198.8:443: connect: connection refused
	E1001 19:48:53.471056       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.198.8:443: connect: connection refused" logger="UnhandledError"
	W1001 19:48:53.472851       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:48:53.526559       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.198.8:443: connect: connection refused
	E1001 19:48:53.526601       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.198.8:443: connect: connection refused" logger="UnhandledError"
	W1001 19:48:53.528308       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:48:53.684168       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:48:54.708417       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:48:55.771742       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:48:56.820954       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:48:57.915533       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:48:58.941776       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:48:59.953163       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:49:00.966448       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:49:02.022411       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.101.11.238:443: connect: connection refused
	W1001 19:49:12.427097       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.198.8:443: connect: connection refused
	E1001 19:49:12.427137       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.198.8:443: connect: connection refused" logger="UnhandledError"
	W1001 19:49:53.481979       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.198.8:443: connect: connection refused
	E1001 19:49:53.482022       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.198.8:443: connect: connection refused" logger="UnhandledError"
	W1001 19:49:53.533865       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.198.8:443: connect: connection refused
	E1001 19:49:53.533904       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.198.8:443: connect: connection refused" logger="UnhandledError"
	I1001 19:50:36.861924       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1001 19:50:36.900074       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
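	
	Note on the apiserver log above: while the backing services were still unreachable, the gcp-auth-mutate.k8s.io webhook "fails open" (the request proceeds with only a warning) whereas the Volcano webhooks "fail closed" (the request is rejected). Which behaviour applies is controlled by each webhook's failurePolicy (Ignore vs Fail). A minimal client-go sketch, assuming only a local kubeconfig, that prints the policy of every mutating webhook:
	
	// webhookpolicy.go: sketch only; lists each mutating admission webhook and its
	// failurePolicy, which decides whether requests fail open (Ignore) or fail
	// closed (Fail) when the webhook backend cannot be reached.
	package main
	
	import (
		"context"
		"fmt"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
	
		cfgs, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
			List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range cfgs.Items {
			for _, w := range c.Webhooks {
				policy := "Fail" // admissionregistration.k8s.io/v1 default when unset
				if w.FailurePolicy != nil {
					policy = string(*w.FailurePolicy)
				}
				fmt.Printf("%-60s failurePolicy=%s\n", w.Name, policy)
			}
		}
	}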
	
	
	==> kube-controller-manager [0207c7ffab113464c36f02b11c024bc11973f262d7b52460c56867c75077b7ba] <==
	I1001 19:49:53.544288       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:53.550544       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:53.555017       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:53.572327       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:54.522050       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 19:49:54.534135       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:55.649211       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:55.676144       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 19:49:56.658494       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:56.664869       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:56.671557       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1001 19:49:56.682066       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 19:49:56.692145       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 19:49:56.698047       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1001 19:50:05.563321       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="80.129µs"
	I1001 19:50:19.612189       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="9.544885ms"
	I1001 19:50:19.612525       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="34.887µs"
	I1001 19:50:20.867442       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="11.489178ms"
	I1001 19:50:20.868574       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="54.653µs"
	I1001 19:50:26.020841       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1001 19:50:26.023671       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1001 19:50:26.076740       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1001 19:50:26.080600       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1001 19:50:36.575291       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I1001 19:50:36.954339       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-164127"
	
	
	==> kube-proxy [83b9717acb1516cde87ae3f4bf58d33db2f552b03296e9d6f79a64cda4b53162] <==
	I1001 19:47:39.377953       1 server_linux.go:66] "Using iptables proxy"
	I1001 19:47:39.469920       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1001 19:47:39.469990       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1001 19:47:39.554431       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1001 19:47:39.554499       1 server_linux.go:169] "Using iptables Proxier"
	I1001 19:47:39.557918       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1001 19:47:39.561136       1 server.go:483] "Version info" version="v1.31.1"
	I1001 19:47:39.561173       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1001 19:47:39.565341       1 config.go:328] "Starting node config controller"
	I1001 19:47:39.565366       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1001 19:47:39.565872       1 config.go:199] "Starting service config controller"
	I1001 19:47:39.565882       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1001 19:47:39.565918       1 config.go:105] "Starting endpoint slice config controller"
	I1001 19:47:39.565922       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1001 19:47:39.666077       1 shared_informer.go:320] Caches are synced for node config
	I1001 19:47:39.666118       1 shared_informer.go:320] Caches are synced for service config
	I1001 19:47:39.666168       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [82bef58213f1783579466a07cd0e35fe85e45970caf74b9acc5a74a5b95593a8] <==
	W1001 19:47:31.163933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 19:47:31.163967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.164227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 19:47:31.164253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.164544       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 19:47:31.164713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.164802       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 19:47:31.164854       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1001 19:47:31.164560       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1001 19:47:31.164655       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 19:47:31.165151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1001 19:47:31.165245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.167152       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 19:47:31.167326       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.167184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 19:47:31.167557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.167260       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 19:47:31.167766       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.168196       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 19:47:31.168351       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.168578       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1001 19:47:31.168618       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1001 19:47:31.168582       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1001 19:47:31.168652       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1001 19:47:32.258135       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 01 19:49:55 addons-164127 kubelet[1470]: I1001 19:49:55.756502    1470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvn5q\" (UniqueName: \"kubernetes.io/projected/881258c6-f605-4652-902a-6ff13e2b651d-kube-api-access-hvn5q\") pod \"881258c6-f605-4652-902a-6ff13e2b651d\" (UID: \"881258c6-f605-4652-902a-6ff13e2b651d\") "
	Oct 01 19:49:55 addons-164127 kubelet[1470]: I1001 19:49:55.756598    1470 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd4vh\" (UniqueName: \"kubernetes.io/projected/ab1722c2-4f0c-49ba-a408-3ccc1f7267e5-kube-api-access-hd4vh\") pod \"ab1722c2-4f0c-49ba-a408-3ccc1f7267e5\" (UID: \"ab1722c2-4f0c-49ba-a408-3ccc1f7267e5\") "
	Oct 01 19:49:55 addons-164127 kubelet[1470]: I1001 19:49:55.758923    1470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab1722c2-4f0c-49ba-a408-3ccc1f7267e5-kube-api-access-hd4vh" (OuterVolumeSpecName: "kube-api-access-hd4vh") pod "ab1722c2-4f0c-49ba-a408-3ccc1f7267e5" (UID: "ab1722c2-4f0c-49ba-a408-3ccc1f7267e5"). InnerVolumeSpecName "kube-api-access-hd4vh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 01 19:49:55 addons-164127 kubelet[1470]: I1001 19:49:55.759260    1470 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/881258c6-f605-4652-902a-6ff13e2b651d-kube-api-access-hvn5q" (OuterVolumeSpecName: "kube-api-access-hvn5q") pod "881258c6-f605-4652-902a-6ff13e2b651d" (UID: "881258c6-f605-4652-902a-6ff13e2b651d"). InnerVolumeSpecName "kube-api-access-hvn5q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 01 19:49:55 addons-164127 kubelet[1470]: I1001 19:49:55.835986    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kzs7s" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:49:55 addons-164127 kubelet[1470]: I1001 19:49:55.858321    1470 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hvn5q\" (UniqueName: \"kubernetes.io/projected/881258c6-f605-4652-902a-6ff13e2b651d-kube-api-access-hvn5q\") on node \"addons-164127\" DevicePath \"\""
	Oct 01 19:49:55 addons-164127 kubelet[1470]: I1001 19:49:55.858513    1470 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hd4vh\" (UniqueName: \"kubernetes.io/projected/ab1722c2-4f0c-49ba-a408-3ccc1f7267e5-kube-api-access-hd4vh\") on node \"addons-164127\" DevicePath \"\""
	Oct 01 19:49:56 addons-164127 kubelet[1470]: I1001 19:49:56.517719    1470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e220fa34dc6a12afb2ac15bd8fb042dff4b14657262a8901bc514abfe07273ab"
	Oct 01 19:49:56 addons-164127 kubelet[1470]: I1001 19:49:56.522330    1470 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3d4e54fb6e36491b4751c1203a1c348cb9ecb47b0370e30bec90ef525739a65"
	Oct 01 19:49:58 addons-164127 kubelet[1470]: I1001 19:49:58.835537    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-v9l9x" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:50:19 addons-164127 kubelet[1470]: I1001 19:50:19.600055    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-6nwlh" podStartSLOduration=149.84717225 podStartE2EDuration="2m33.600016306s" podCreationTimestamp="2024-10-01 19:47:46 +0000 UTC" firstStartedPulling="2024-10-01 19:50:00.981963236 +0000 UTC m=+148.252069424" lastFinishedPulling="2024-10-01 19:50:04.734807301 +0000 UTC m=+152.004913480" observedRunningTime="2024-10-01 19:50:05.564783931 +0000 UTC m=+152.834890119" watchObservedRunningTime="2024-10-01 19:50:19.600016306 +0000 UTC m=+166.870122495"
	Oct 01 19:50:20 addons-164127 kubelet[1470]: I1001 19:50:20.853817    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-89d5ffd79-8xsdp" podStartSLOduration=66.130263199 podStartE2EDuration="1m8.853797751s" podCreationTimestamp="2024-10-01 19:49:12 +0000 UTC" firstStartedPulling="2024-10-01 19:50:16.456828096 +0000 UTC m=+163.726934275" lastFinishedPulling="2024-10-01 19:50:19.180362647 +0000 UTC m=+166.450468827" observedRunningTime="2024-10-01 19:50:19.601343079 +0000 UTC m=+166.871449283" watchObservedRunningTime="2024-10-01 19:50:20.853797751 +0000 UTC m=+168.123903939"
	Oct 01 19:50:26 addons-164127 kubelet[1470]: I1001 19:50:26.839476    1470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="881258c6-f605-4652-902a-6ff13e2b651d" path="/var/lib/kubelet/pods/881258c6-f605-4652-902a-6ff13e2b651d/volumes"
	Oct 01 19:50:26 addons-164127 kubelet[1470]: I1001 19:50:26.840879    1470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab1722c2-4f0c-49ba-a408-3ccc1f7267e5" path="/var/lib/kubelet/pods/ab1722c2-4f0c-49ba-a408-3ccc1f7267e5/volumes"
	Oct 01 19:50:32 addons-164127 kubelet[1470]: I1001 19:50:32.893829    1470 scope.go:117] "RemoveContainer" containerID="66ffa85c2134a516592b1676fd6fde7726bae3610b451bf6fa76a60e07456b61"
	Oct 01 19:50:32 addons-164127 kubelet[1470]: I1001 19:50:32.902680    1470 scope.go:117] "RemoveContainer" containerID="c185be33043acf09c84e20fd0d2b451a2aa3953a9a92db2277c5222e7910007a"
	Oct 01 19:50:36 addons-164127 kubelet[1470]: I1001 19:50:36.840321    1470 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddf28bd4-d38d-4c24-9166-30bbb6ebad27" path="/var/lib/kubelet/pods/ddf28bd4-d38d-4c24-9166-30bbb6ebad27/volumes"
	Oct 01 19:50:53 addons-164127 kubelet[1470]: I1001 19:50:53.835857    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-79kbd" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:51:18 addons-164127 kubelet[1470]: I1001 19:51:18.835852    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kzs7s" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:51:18 addons-164127 kubelet[1470]: I1001 19:51:18.837235    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-v9l9x" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:51:32 addons-164127 kubelet[1470]: I1001 19:51:32.961390    1470 scope.go:117] "RemoveContainer" containerID="c583344596fcba9f6871b3efa60cd5767e8f0187e1ac99d4b65299d3336bebdc"
	Oct 01 19:52:04 addons-164127 kubelet[1470]: I1001 19:52:04.835744    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-79kbd" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:52:27 addons-164127 kubelet[1470]: I1001 19:52:27.835665    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kzs7s" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:52:32 addons-164127 kubelet[1470]: I1001 19:52:32.836815    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-v9l9x" secret="" err="secret \"gcp-auth\" not found"
	Oct 01 19:53:27 addons-164127 kubelet[1470]: I1001 19:53:27.835945    1470 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-79kbd" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [e20e8773396de99db60827270b692a27f2f704d9f611e65fafff319cc23a485d] <==
	I1001 19:47:43.631066       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 19:47:43.755390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 19:47:43.755476       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 19:47:43.777013       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 19:47:43.777387       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f309d8b2-d632-45f9-8249-4d4f18e16ae9", APIVersion:"v1", ResourceVersion:"578", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-164127_036d7608-931d-405f-998c-0a50a455f725 became leader
	I1001 19:47:43.777495       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-164127_036d7608-931d-405f-998c-0a50a455f725!
	I1001 19:47:43.878295       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-164127_036d7608-931d-405f-998c-0a50a455f725!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-164127 -n addons-164127
helpers_test.go:261: (dbg) Run:  kubectl --context addons-164127 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-q6df2 ingress-nginx-admission-patch-czj56 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-164127 describe pod ingress-nginx-admission-create-q6df2 ingress-nginx-admission-patch-czj56 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-164127 describe pod ingress-nginx-admission-create-q6df2 ingress-nginx-admission-patch-czj56 test-job-nginx-0: exit status 1 (89.204426ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-q6df2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-czj56" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-164127 describe pod ingress-nginx-admission-create-q6df2 ingress-nginx-admission-patch-czj56 test-job-nginx-0: exit status 1
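Note on the post-mortem above: the helper first lists pods whose phase is not Running, then describes them by name; by the time the describe ran, all three pods returned NotFound, suggesting they were removed between the two commands, which is why the describe step exits non-zero without failing the post-mortem itself. A rough sketch of that sequence, run as kubectl subprocesses the way the helpers do (the kubectl context name comes from the log; everything else is an assumption, not minikube's helper code):

// postmortem.go: sketch of the list-then-describe post-mortem flow above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func run(args ...string) (string, error) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	return string(out), err
}

func main() {
	const kctx = "addons-164127"

	// Step 1: names of all pods that are not in phase Running.
	names, err := run("--context", kctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running")
	if err != nil {
		panic(err)
	}

	// Step 2: describe them; a pod may vanish in between, so a non-zero exit
	// (NotFound) is reported but treated as non-fatal, matching the output above.
	if pods := strings.Fields(names); len(pods) > 0 {
		args := append([]string{"--context", kctx, "describe", "pod"}, pods...)
		out, err := run(args...)
		fmt.Println(out)
		if err != nil {
			fmt.Println("describe returned:", err)
		}
	}
}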
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable volcano --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-164127 addons disable volcano --alsologtostderr -v=1: (11.159835961s)
--- FAIL: TestAddons/serial/Volcano (210.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (382.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-992970 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1001 20:35:20.382975  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-992970 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m17.989200374s)

                                                
                                                
-- stdout --
	* [old-k8s-version-992970] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-992970" primary control-plane node in "old-k8s-version-992970" cluster
	* Pulling base image v0.0.45-1727731891-master ...
	* Restarting existing docker container for "old-k8s-version-992970" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-992970 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:35:18.157355  945840 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:35:18.157545  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:35:18.157572  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:35:18.157595  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:35:18.157848  945840 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:35:18.158235  945840 out.go:352] Setting JSON to false
	I1001 20:35:18.159191  945840 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15466,"bootTime":1727799453,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 20:35:18.159288  945840 start.go:139] virtualization:  
	I1001 20:35:18.161630  945840 out.go:177] * [old-k8s-version-992970] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 20:35:18.164040  945840 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:35:18.164114  945840 notify.go:220] Checking for updates...
	I1001 20:35:18.167607  945840 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:35:18.169597  945840 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 20:35:18.171600  945840 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	I1001 20:35:18.173614  945840 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 20:35:18.175714  945840 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:35:18.178456  945840 config.go:182] Loaded profile config "old-k8s-version-992970": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1001 20:35:18.180864  945840 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1001 20:35:18.182812  945840 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:35:18.219064  945840 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 20:35:18.219177  945840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:35:18.288197  945840 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-01 20:35:18.279118396 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:35:18.288302  945840 docker.go:318] overlay module found
	I1001 20:35:18.290984  945840 out.go:177] * Using the docker driver based on existing profile
	I1001 20:35:18.292926  945840 start.go:297] selected driver: docker
	I1001 20:35:18.292942  945840 start.go:901] validating driver "docker" against &{Name:old-k8s-version-992970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-992970 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:35:18.293068  945840 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:35:18.293674  945840 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:35:18.363816  945840 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-01 20:35:18.354119291 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:35:18.364208  945840 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:35:18.364236  945840 cni.go:84] Creating CNI manager for ""
	I1001 20:35:18.364280  945840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 20:35:18.364322  945840 start.go:340] cluster config:
	{Name:old-k8s-version-992970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-992970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:35:18.366848  945840 out.go:177] * Starting "old-k8s-version-992970" primary control-plane node in "old-k8s-version-992970" cluster
	I1001 20:35:18.369072  945840 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1001 20:35:18.371451  945840 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1001 20:35:18.373384  945840 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1001 20:35:18.373439  945840 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1001 20:35:18.373451  945840 cache.go:56] Caching tarball of preloaded images
	I1001 20:35:18.373530  945840 preload.go:172] Found /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 20:35:18.373562  945840 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1001 20:35:18.373672  945840 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/config.json ...
	I1001 20:35:18.373884  945840 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 20:35:18.396708  945840 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1001 20:35:18.396731  945840 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1001 20:35:18.396745  945840 cache.go:194] Successfully downloaded all kic artifacts
	I1001 20:35:18.396774  945840 start.go:360] acquireMachinesLock for old-k8s-version-992970: {Name:mk84c124b9e0cd0ec151f943e16b2cdc54707b8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:35:18.396832  945840 start.go:364] duration metric: took 35.741µs to acquireMachinesLock for "old-k8s-version-992970"
	I1001 20:35:18.396857  945840 start.go:96] Skipping create...Using existing machine configuration
	I1001 20:35:18.396866  945840 fix.go:54] fixHost starting: 
	I1001 20:35:18.397137  945840 cli_runner.go:164] Run: docker container inspect old-k8s-version-992970 --format={{.State.Status}}
	I1001 20:35:18.421236  945840 fix.go:112] recreateIfNeeded on old-k8s-version-992970: state=Stopped err=<nil>
	W1001 20:35:18.421268  945840 fix.go:138] unexpected machine state, will restart: <nil>
	I1001 20:35:18.423807  945840 out.go:177] * Restarting existing docker container for "old-k8s-version-992970" ...
	I1001 20:35:18.425396  945840 cli_runner.go:164] Run: docker start old-k8s-version-992970
	I1001 20:35:18.759718  945840 cli_runner.go:164] Run: docker container inspect old-k8s-version-992970 --format={{.State.Status}}
	I1001 20:35:18.784010  945840 kic.go:430] container "old-k8s-version-992970" state is running.
	I1001 20:35:18.784539  945840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992970
	I1001 20:35:18.811645  945840 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/config.json ...
	I1001 20:35:18.811865  945840 machine.go:93] provisionDockerMachine start ...
	I1001 20:35:18.811928  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:18.837751  945840 main.go:141] libmachine: Using SSH client type: native
	I1001 20:35:18.840777  945840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1001 20:35:18.840800  945840 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:35:18.841483  945840 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32928->127.0.0.1:33829: read: connection reset by peer
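The "Error dialing TCP ... connection reset by peer" line is a transient failure: the kic container was started only a fraction of a second earlier and sshd inside it is not yet accepting connections, so libmachine simply retries until the hostname probe on the following lines succeeds about three seconds later. For a manual check of the same path, the forwarded port and per-machine key shown in the surrounding libmachine/sshutil lines can be reused (values below are specific to this run):

    # sshd port published by the kic container (33829 in this run)
    docker port old-k8s-version-992970 22
    # the same probe minikube performs over the forwarded port
    ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa \
        -p 33829 docker@127.0.0.1 hostname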
	I1001 20:35:21.984467  945840 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-992970
	
	I1001 20:35:21.984490  945840 ubuntu.go:169] provisioning hostname "old-k8s-version-992970"
	I1001 20:35:21.984561  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:22.007227  945840 main.go:141] libmachine: Using SSH client type: native
	I1001 20:35:22.007478  945840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1001 20:35:22.007491  945840 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-992970 && echo "old-k8s-version-992970" | sudo tee /etc/hostname
	I1001 20:35:22.157544  945840 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-992970
	
	I1001 20:35:22.157624  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:22.182078  945840 main.go:141] libmachine: Using SSH client type: native
	I1001 20:35:22.182365  945840 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33829 <nil> <nil>}
	I1001 20:35:22.182390  945840 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-992970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-992970/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-992970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:35:22.320747  945840 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:35:22.320817  945840 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19736-735883/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-735883/.minikube}
	I1001 20:35:22.320892  945840 ubuntu.go:177] setting up certificates
	I1001 20:35:22.320919  945840 provision.go:84] configureAuth start
	I1001 20:35:22.321007  945840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992970
	I1001 20:35:22.341370  945840 provision.go:143] copyHostCerts
	I1001 20:35:22.341431  945840 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-735883/.minikube/ca.pem, removing ...
	I1001 20:35:22.341446  945840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-735883/.minikube/ca.pem
	I1001 20:35:22.341519  945840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/ca.pem (1078 bytes)
	I1001 20:35:22.341625  945840 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-735883/.minikube/cert.pem, removing ...
	I1001 20:35:22.341631  945840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-735883/.minikube/cert.pem
	I1001 20:35:22.341657  945840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/cert.pem (1123 bytes)
	I1001 20:35:22.341713  945840 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-735883/.minikube/key.pem, removing ...
	I1001 20:35:22.341717  945840 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-735883/.minikube/key.pem
	I1001 20:35:22.341740  945840 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/key.pem (1679 bytes)
	I1001 20:35:22.341791  945840 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-992970 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-992970]
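configureAuth regenerates the docker-machine style server certificate for the node, signed by the local minikube CA and carrying the SANs listed above (127.0.0.1, the container IP 192.168.76.2, localhost, minikube and the profile name). To confirm what actually ended up in the certificate, the same openssl binary used elsewhere in this log can be pointed at the generated file (path taken from this run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'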
	I1001 20:35:23.240660  945840 provision.go:177] copyRemoteCerts
	I1001 20:35:23.240780  945840 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:35:23.240853  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:23.257844  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:23.353030  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 20:35:23.382502  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1001 20:35:23.406498  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:35:23.431181  945840 provision.go:87] duration metric: took 1.110223771s to configureAuth
	I1001 20:35:23.431253  945840 ubuntu.go:193] setting minikube options for container-runtime
	I1001 20:35:23.431483  945840 config.go:182] Loaded profile config "old-k8s-version-992970": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1001 20:35:23.431513  945840 machine.go:96] duration metric: took 4.619638715s to provisionDockerMachine
	I1001 20:35:23.431534  945840 start.go:293] postStartSetup for "old-k8s-version-992970" (driver="docker")
	I1001 20:35:23.431557  945840 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:35:23.431645  945840 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:35:23.431718  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:23.466216  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:23.562586  945840 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:35:23.566196  945840 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 20:35:23.566237  945840 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 20:35:23.566248  945840 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 20:35:23.566256  945840 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 20:35:23.566269  945840 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-735883/.minikube/addons for local assets ...
	I1001 20:35:23.566330  945840 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-735883/.minikube/files for local assets ...
	I1001 20:35:23.566413  945840 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/ssl/certs/7412642.pem -> 7412642.pem in /etc/ssl/certs
	I1001 20:35:23.566525  945840 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:35:23.575996  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/ssl/certs/7412642.pem --> /etc/ssl/certs/7412642.pem (1708 bytes)
	I1001 20:35:23.602588  945840 start.go:296] duration metric: took 171.027382ms for postStartSetup
	I1001 20:35:23.602673  945840 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 20:35:23.602722  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:23.620516  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:23.714163  945840 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 20:35:23.719287  945840 fix.go:56] duration metric: took 5.322412179s for fixHost
	I1001 20:35:23.719310  945840 start.go:83] releasing machines lock for "old-k8s-version-992970", held for 5.322464822s
	I1001 20:35:23.719377  945840 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-992970
	I1001 20:35:23.737133  945840 ssh_runner.go:195] Run: cat /version.json
	I1001 20:35:23.737191  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:23.737430  945840 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:35:23.737491  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:23.770003  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:23.776111  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:23.999125  945840 ssh_runner.go:195] Run: systemctl --version
	I1001 20:35:24.004089  945840 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 20:35:24.008979  945840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1001 20:35:24.026838  945840 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1001 20:35:24.026968  945840 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:35:24.035804  945840 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
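The two find/sed invocations above are the pre-start CNI housekeeping: the first makes sure the loopback config in /etc/cni/net.d carries a "name" field and a cniVersion that containerd 1.7 accepts (1.0.0), and the second would rename any bridge or podman configs to *.mk_disabled so they cannot shadow the kindnet config; in this run none were present. After the patch the loopback file is expected to look roughly like the following (file name varies with the base image, shown purely as an illustration):

    cat /etc/cni/net.d/*loopback.conf*
    # {
    #   "cniVersion": "1.0.0",
    #   "name": "loopback",
    #   "type": "loopback"
    # }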
	I1001 20:35:24.035883  945840 start.go:495] detecting cgroup driver to use...
	I1001 20:35:24.035932  945840 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 20:35:24.036013  945840 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1001 20:35:24.050866  945840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 20:35:24.063703  945840 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:35:24.063827  945840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:35:24.077720  945840 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:35:24.090059  945840 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:35:24.222801  945840 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:35:24.330296  945840 docker.go:233] disabling docker service ...
	I1001 20:35:24.330361  945840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:35:24.345037  945840 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:35:24.357895  945840 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:35:24.476131  945840 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:35:24.581861  945840 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:35:24.595565  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:35:24.613112  945840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1001 20:35:24.626272  945840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 20:35:24.636620  945840 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 20:35:24.636690  945840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 20:35:24.647164  945840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 20:35:24.657144  945840 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 20:35:24.666794  945840 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 20:35:24.676831  945840 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:35:24.685959  945840 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
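Right before this block minikube writes a minimal /etc/crictl.yaml pointing crictl at unix:///run/containerd/containerd.sock, and the run of sed commands from 20:35:24.613 onward then rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.2, the tag paired with Kubernetes v1.20, restrict_oom_score_adj is switched off, SystemdCgroup is forced to false to match the "cgroupfs" driver detected on the host, the legacy io.containerd.runtime.v1.linux and runc.v1 runtime names are mapped to io.containerd.runc.v2, and the CNI conf_dir is pointed at /etc/cni/net.d. A quick spot check of the result on the node (key names assume the stock CRI plugin layout of containerd 1.7):

    cat /etc/crictl.yaml
    grep -nE 'sandbox_image|SystemdCgroup|conf_dir|runc\.v2' /etc/containerd/config.toml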
	I1001 20:35:24.695874  945840 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:35:24.704912  945840 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:35:24.713673  945840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:35:24.820104  945840 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 20:35:25.062353  945840 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1001 20:35:25.062468  945840 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1001 20:35:25.066839  945840 start.go:563] Will wait 60s for crictl version
	I1001 20:35:25.066910  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:35:25.070713  945840 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:35:25.127821  945840 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1001 20:35:25.127893  945840 ssh_runner.go:195] Run: containerd --version
	I1001 20:35:25.154409  945840 ssh_runner.go:195] Run: containerd --version
	I1001 20:35:25.179236  945840 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1001 20:35:25.181897  945840 cli_runner.go:164] Run: docker network inspect old-k8s-version-992970 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 20:35:25.198106  945840 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1001 20:35:25.202058  945840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:35:25.216257  945840 kubeadm.go:883] updating cluster {Name:old-k8s-version-992970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-992970 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:
/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:35:25.216392  945840 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1001 20:35:25.216468  945840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:35:25.266312  945840 containerd.go:627] all images are preloaded for containerd runtime.
	I1001 20:35:25.266339  945840 containerd.go:534] Images already preloaded, skipping extraction
	I1001 20:35:25.266414  945840 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:35:25.315291  945840 containerd.go:627] all images are preloaded for containerd runtime.
	I1001 20:35:25.315323  945840 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:35:25.315348  945840 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I1001 20:35:25.315485  945840 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-992970 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-992970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
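The unit fragment above is what lands in the kubelet drop-in a few lines further down (the 442-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). The flag set is the v1.20-era one: --container-runtime=remote plus --container-runtime-endpoint steer the kubelet at the containerd socket, and --network-plugin=cni is the legacy flag that still existed before the dockershim removal. To inspect the rendered unit on the node itself:

    systemctl cat kubelet
    # or just the drop-in written by minikube:
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf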
	I1001 20:35:25.315580  945840 ssh_runner.go:195] Run: sudo crictl info
	I1001 20:35:25.361867  945840 cni.go:84] Creating CNI manager for ""
	I1001 20:35:25.361895  945840 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 20:35:25.361907  945840 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:35:25.361929  945840 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-992970 NodeName:old-k8s-version-992970 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1001 20:35:25.362063  945840 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-992970"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
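The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written out as one multi-document file, the 2125-byte /var/tmp/minikube/kubeadm.yaml.new scp'd a few lines below. Because this is a restart of an existing cluster, the file is not handed to kubeadm init; it is only compared against the copy left by the previous run to decide whether the control plane needs reconfiguring, which is the check that later reports "The running cluster does not require reconfiguration":

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new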
	I1001 20:35:25.362133  945840 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1001 20:35:25.371349  945840 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:35:25.371443  945840 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:35:25.381093  945840 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1001 20:35:25.399688  945840 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:35:25.418765  945840 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1001 20:35:25.437848  945840 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1001 20:35:25.441425  945840 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:35:25.452587  945840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:35:25.557240  945840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:35:25.573502  945840 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970 for IP: 192.168.76.2
	I1001 20:35:25.573539  945840 certs.go:194] generating shared ca certs ...
	I1001 20:35:25.573573  945840 certs.go:226] acquiring lock for ca certs: {Name:mk132cf96fd4e71a64bde5e1335b23d155d99f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:35:25.573761  945840 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-735883/.minikube/ca.key
	I1001 20:35:25.573828  945840 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.key
	I1001 20:35:25.573859  945840 certs.go:256] generating profile certs ...
	I1001 20:35:25.573979  945840 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.key
	I1001 20:35:25.574099  945840 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/apiserver.key.20153d39
	I1001 20:35:25.574174  945840 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/proxy-client.key
	I1001 20:35:25.574329  945840 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/741264.pem (1338 bytes)
	W1001 20:35:25.574391  945840 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-735883/.minikube/certs/741264_empty.pem, impossibly tiny 0 bytes
	I1001 20:35:25.574407  945840 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 20:35:25.574453  945840 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem (1078 bytes)
	I1001 20:35:25.574497  945840 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:35:25.574541  945840 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem (1679 bytes)
	I1001 20:35:25.574622  945840 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/ssl/certs/7412642.pem (1708 bytes)
	I1001 20:35:25.575385  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:35:25.603610  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 20:35:25.630462  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:35:25.665994  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 20:35:25.709535  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1001 20:35:25.750557  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1001 20:35:25.780799  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:35:25.806217  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:35:25.834459  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:35:25.874716  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/certs/741264.pem --> /usr/share/ca-certificates/741264.pem (1338 bytes)
	I1001 20:35:25.907164  945840 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/ssl/certs/7412642.pem --> /usr/share/ca-certificates/7412642.pem (1708 bytes)
	I1001 20:35:25.934184  945840 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:35:25.953523  945840 ssh_runner.go:195] Run: openssl version
	I1001 20:35:25.959741  945840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:35:25.969813  945840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:35:25.973694  945840 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:35:25.973778  945840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:35:25.981905  945840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1001 20:35:25.991537  945840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/741264.pem && ln -fs /usr/share/ca-certificates/741264.pem /etc/ssl/certs/741264.pem"
	I1001 20:35:26.001533  945840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741264.pem
	I1001 20:35:26.006075  945840 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:57 /usr/share/ca-certificates/741264.pem
	I1001 20:35:26.006168  945840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741264.pem
	I1001 20:35:26.013720  945840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/741264.pem /etc/ssl/certs/51391683.0"
	I1001 20:35:26.023739  945840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7412642.pem && ln -fs /usr/share/ca-certificates/7412642.pem /etc/ssl/certs/7412642.pem"
	I1001 20:35:26.034329  945840 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7412642.pem
	I1001 20:35:26.038287  945840 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:57 /usr/share/ca-certificates/7412642.pem
	I1001 20:35:26.038367  945840 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7412642.pem
	I1001 20:35:26.045740  945840 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7412642.pem /etc/ssl/certs/3ec20f2e.0"
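The test-and-link commands from 20:35:25.959 onward implement the standard OpenSSL CA-directory layout: each certificate copied under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash (minikubeCA.pem hashes to b5213941 in this run, hence b5213941.0), which is how TLS clients on the node locate the minikube CA without a rebuilt bundle. The generic form of one round, using the same openssl invocation as the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"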
	I1001 20:35:26.055818  945840 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:35:26.059864  945840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1001 20:35:26.067099  945840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1001 20:35:26.074481  945840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1001 20:35:26.081766  945840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1001 20:35:26.089203  945840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1001 20:35:26.096436  945840 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1001 20:35:26.103527  945840 kubeadm.go:392] StartCluster: {Name:old-k8s-version-992970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-992970 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/ho
me/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:35:26.103631  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1001 20:35:26.103699  945840 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:35:26.159680  945840 cri.go:89] found id: "79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
	I1001 20:35:26.159713  945840 cri.go:89] found id: "f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:35:26.159730  945840 cri.go:89] found id: "85c4a4dae15a80c2906ca874e151557d5b6ee157154a49b57e0832949ec02e2d"
	I1001 20:35:26.159734  945840 cri.go:89] found id: "d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:35:26.159738  945840 cri.go:89] found id: "7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:35:26.159742  945840 cri.go:89] found id: "5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:35:26.159746  945840 cri.go:89] found id: "080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:35:26.159749  945840 cri.go:89] found id: "1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:35:26.159753  945840 cri.go:89] found id: ""
	I1001 20:35:26.159817  945840 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1001 20:35:26.172896  945840 cri.go:116] JSON = null
	W1001 20:35:26.172959  945840 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
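The unpause warning is expected on this path: crictl still reports the eight kube-system containers created before the node was stopped, while runc list under /run/containerd/runc/k8s.io returns nothing because none of those containers are running after the docker stop/start, so there is nothing to unpause. minikube logs the mismatch and carries on to the "found existing configuration files, will attempt cluster restart" branch below. The two views it compares, slightly simplified:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    sudo runc --root /run/containerd/runc/k8s.io list -f json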
	I1001 20:35:26.173047  945840 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:35:26.183538  945840 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1001 20:35:26.183560  945840 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1001 20:35:26.183622  945840 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1001 20:35:26.192890  945840 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1001 20:35:26.193406  945840 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-992970" does not appear in /home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 20:35:26.193539  945840 kubeconfig.go:62] /home/jenkins/minikube-integration/19736-735883/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-992970" cluster setting kubeconfig missing "old-k8s-version-992970" context setting]
	I1001 20:35:26.193853  945840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/kubeconfig: {Name:mk16c47fd3084557c83466477611ca0e739aa58e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:35:26.195407  945840 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1001 20:35:26.205253  945840 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1001 20:35:26.205298  945840 kubeadm.go:597] duration metric: took 21.7296ms to restartPrimaryControlPlane
	I1001 20:35:26.205313  945840 kubeadm.go:394] duration metric: took 101.803439ms to StartCluster
	I1001 20:35:26.205330  945840 settings.go:142] acquiring lock: {Name:mk46877febca9f587b39958e976b5a1299db9afa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:35:26.205399  945840 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 20:35:26.206161  945840 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/kubeconfig: {Name:mk16c47fd3084557c83466477611ca0e739aa58e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:35:26.206406  945840 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1001 20:35:26.206773  945840 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1001 20:35:26.206840  945840 config.go:182] Loaded profile config "old-k8s-version-992970": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1001 20:35:26.206852  945840 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-992970"
	I1001 20:35:26.206878  945840 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-992970"
	I1001 20:35:26.206880  945840 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-992970"
	W1001 20:35:26.206885  945840 addons.go:243] addon storage-provisioner should already be in state true
	I1001 20:35:26.206902  945840 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-992970"
	I1001 20:35:26.206910  945840 host.go:66] Checking if "old-k8s-version-992970" exists ...
	I1001 20:35:26.207386  945840 cli_runner.go:164] Run: docker container inspect old-k8s-version-992970 --format={{.State.Status}}
	I1001 20:35:26.207537  945840 cli_runner.go:164] Run: docker container inspect old-k8s-version-992970 --format={{.State.Status}}
	I1001 20:35:26.207960  945840 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-992970"
	I1001 20:35:26.207995  945840 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-992970"
	W1001 20:35:26.208017  945840 addons.go:243] addon metrics-server should already be in state true
	I1001 20:35:26.208069  945840 host.go:66] Checking if "old-k8s-version-992970" exists ...
	I1001 20:35:26.208548  945840 cli_runner.go:164] Run: docker container inspect old-k8s-version-992970 --format={{.State.Status}}
	I1001 20:35:26.210210  945840 addons.go:69] Setting dashboard=true in profile "old-k8s-version-992970"
	I1001 20:35:26.210245  945840 addons.go:234] Setting addon dashboard=true in "old-k8s-version-992970"
	W1001 20:35:26.210253  945840 addons.go:243] addon dashboard should already be in state true
	I1001 20:35:26.210290  945840 host.go:66] Checking if "old-k8s-version-992970" exists ...
	I1001 20:35:26.210766  945840 cli_runner.go:164] Run: docker container inspect old-k8s-version-992970 --format={{.State.Status}}
	I1001 20:35:26.211033  945840 out.go:177] * Verifying Kubernetes components...
	I1001 20:35:26.220298  945840 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:35:26.278168  945840 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1001 20:35:26.278232  945840 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1001 20:35:26.280513  945840 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1001 20:35:26.280727  945840 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1001 20:35:26.280757  945840 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1001 20:35:26.280834  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:26.281065  945840 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:35:26.281082  945840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1001 20:35:26.281119  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:26.287854  945840 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1001 20:35:26.289713  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1001 20:35:26.289733  945840 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1001 20:35:26.289805  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:26.294491  945840 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-992970"
	W1001 20:35:26.294519  945840 addons.go:243] addon default-storageclass should already be in state true
	I1001 20:35:26.294544  945840 host.go:66] Checking if "old-k8s-version-992970" exists ...
	I1001 20:35:26.294959  945840 cli_runner.go:164] Run: docker container inspect old-k8s-version-992970 --format={{.State.Status}}
	I1001 20:35:26.360626  945840 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1001 20:35:26.360654  945840 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1001 20:35:26.360740  945840 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-992970
	I1001 20:35:26.362116  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:26.364489  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:26.370921  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:26.396547  945840 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33829 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/old-k8s-version-992970/id_rsa Username:docker}
	I1001 20:35:26.466533  945840 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:35:26.495108  945840 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-992970" to be "Ready" ...
	I1001 20:35:26.580989  945840 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1001 20:35:26.581068  945840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1001 20:35:26.615100  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:35:26.619357  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:35:26.645705  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1001 20:35:26.645778  945840 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1001 20:35:26.686971  945840 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1001 20:35:26.687046  945840 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1001 20:35:26.806528  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1001 20:35:26.806610  945840 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1001 20:35:26.807425  945840 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:35:26.807483  945840 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W1001 20:35:26.859264  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:26.859298  945840 retry.go:31] will retry after 243.400698ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:26.859334  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:26.859349  945840 retry.go:31] will retry after 265.474908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
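The repeated "apply failed, will retry" / "will retry after ...ms" pairs above come from minikube re-running each kubectl apply with a growing delay until kube-apiserver on localhost:8443 starts answering again. Below is a minimal Go sketch of that retry-with-backoff pattern; the function name, attempt budget, starting delay, and command layout are illustrative, not minikube's actual retry.go/addons.go code.

    package addons

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs a kubectl apply until it succeeds or the attempt
    // budget runs out, sleeping a little longer after each failure, in the
    // spirit of the "will retry after ..." lines above. Paths, flags, and the
    // backoff schedule are illustrative only.
    func applyWithRetry(kubectl, manifest string, attempts int) error {
        delay := 250 * time.Millisecond
        var lastErr error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
                kubectl, "apply", "--force", "-f", manifest)
            out, err := cmd.CombinedOutput()
            if err == nil {
                return nil
            }
            lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
            fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // back off before the next attempt
        }
        return lastErr
    }

The roughly doubling delay mirrors the progression of the logged waits (hundreds of milliseconds growing toward multiple seconds) while the apiserver is still unreachable.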
	I1001 20:35:26.866557  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:35:26.876473  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1001 20:35:26.876501  945840 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1001 20:35:26.925352  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1001 20:35:26.925377  945840 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1001 20:35:26.982116  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1001 20:35:26.982142  945840 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1001 20:35:26.992626  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:26.992658  945840 retry.go:31] will retry after 327.200438ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.005168  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1001 20:35:27.005194  945840 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1001 20:35:27.023556  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1001 20:35:27.023583  945840 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1001 20:35:27.041532  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1001 20:35:27.041559  945840 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1001 20:35:27.059240  945840 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1001 20:35:27.059264  945840 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1001 20:35:27.077226  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1001 20:35:27.103368  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:35:27.125660  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1001 20:35:27.246069  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.246100  945840 retry.go:31] will retry after 363.058006ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.320373  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1001 20:35:27.354840  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.354873  945840 retry.go:31] will retry after 464.410613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:27.372837  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.372869  945840 retry.go:31] will retry after 230.830268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:27.469913  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.469946  945840 retry.go:31] will retry after 295.467703ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.604838  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:35:27.609320  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1001 20:35:27.703570  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.703712  945840 retry.go:31] will retry after 532.729201ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:27.703663  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.703768  945840 retry.go:31] will retry after 429.234277ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.765958  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:35:27.820196  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1001 20:35:27.852900  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.852982  945840 retry.go:31] will retry after 334.656549ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:27.941475  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:27.941557  945840 retry.go:31] will retry after 688.026414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:28.133390  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1001 20:35:28.187823  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:35:28.237006  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1001 20:35:28.319865  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:28.319911  945840 retry.go:31] will retry after 504.132692ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:28.455515  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:28.455606  945840 retry.go:31] will retry after 1.107161069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:28.478641  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:28.478728  945840 retry.go:31] will retry after 1.069956548s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:28.496307  945840 node_ready.go:53] error getting node "old-k8s-version-992970": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-992970": dial tcp 192.168.76.2:8443: connect: connection refused
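The node_ready errors above are informational rather than fatal: the wait loop keeps fetching the node object and inspecting its Ready condition, and while kube-apiserver is still restarting each fetch simply fails with "connection refused" and is retried. A hedged client-go sketch of that kind of wait follows; the function name and poll interval are illustrative.

    package readiness

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the named node until its Ready condition is True or
    // the timeout expires. Fetch errors (e.g. "connection refused" while the
    // apiserver restarts) are treated as "not ready yet" and retried.
    func waitNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return true
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return false
    }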
	I1001 20:35:28.630749  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1001 20:35:28.782084  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:28.782169  945840 retry.go:31] will retry after 573.268101ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:28.824439  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1001 20:35:28.961603  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:28.961695  945840 retry.go:31] will retry after 962.014636ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:29.355713  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1001 20:35:29.488943  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:29.489044  945840 retry.go:31] will retry after 1.081883568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:29.549853  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:35:29.563290  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1001 20:35:29.732175  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:29.732266  945840 retry.go:31] will retry after 1.172165614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:29.764213  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:29.764298  945840 retry.go:31] will retry after 1.250181221s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:29.924609  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1001 20:35:30.076323  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:30.076421  945840 retry.go:31] will retry after 1.089952678s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:30.571615  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1001 20:35:30.689536  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:30.689618  945840 retry.go:31] will retry after 2.713140224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:30.904828  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:35:30.995654  945840 node_ready.go:53] error getting node "old-k8s-version-992970": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-992970": dial tcp 192.168.76.2:8443: connect: connection refused
	I1001 20:35:31.015014  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1001 20:35:31.106226  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:31.106314  945840 retry.go:31] will retry after 2.446751769s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:31.167492  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1001 20:35:31.204960  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:31.205061  945840 retry.go:31] will retry after 2.518270485s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:31.319324  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:31.319404  945840 retry.go:31] will retry after 2.04746681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:32.995930  945840 node_ready.go:53] error getting node "old-k8s-version-992970": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-992970": dial tcp 192.168.76.2:8443: connect: connection refused
	I1001 20:35:33.367836  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1001 20:35:33.403377  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1001 20:35:33.540162  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:33.540242  945840 retry.go:31] will retry after 1.70213914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:33.553596  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1001 20:35:33.598628  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:33.598723  945840 retry.go:31] will retry after 1.627532081s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1001 20:35:33.695202  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:33.695283  945840 retry.go:31] will retry after 3.527487505s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:33.723536  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1001 20:35:33.832836  945840 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:33.832868  945840 retry.go:31] will retry after 4.163084359s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1001 20:35:35.227305  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1001 20:35:35.242655  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1001 20:35:37.223646  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1001 20:35:37.996357  945840 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1001 20:35:45.496359  945840 node_ready.go:53] error getting node "old-k8s-version-992970": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-992970": net/http: TLS handshake timeout
	I1001 20:35:46.195518  945840 node_ready.go:49] node "old-k8s-version-992970" has status "Ready":"True"
	I1001 20:35:46.195545  945840 node_ready.go:38] duration metric: took 19.700402818s for node "old-k8s-version-992970" to be "Ready" ...
	I1001 20:35:46.195555  945840 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:35:46.335464  945840 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-tssxl" in "kube-system" namespace to be "Ready" ...
	I1001 20:35:46.358632  945840 pod_ready.go:93] pod "coredns-74ff55c5b-tssxl" in "kube-system" namespace has status "Ready":"True"
	I1001 20:35:46.358706  945840 pod_ready.go:82] duration metric: took 23.143633ms for pod "coredns-74ff55c5b-tssxl" in "kube-system" namespace to be "Ready" ...
	I1001 20:35:46.358732  945840 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-992970" in "kube-system" namespace to be "Ready" ...
	I1001 20:35:46.385353  945840 pod_ready.go:93] pod "etcd-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"True"
	I1001 20:35:46.385374  945840 pod_ready.go:82] duration metric: took 26.621773ms for pod "etcd-old-k8s-version-992970" in "kube-system" namespace to be "Ready" ...
	I1001 20:35:46.385388  945840 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-992970" in "kube-system" namespace to be "Ready" ...
	I1001 20:35:47.102670  945840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.875323062s)
	I1001 20:35:47.252347  945840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.028660565s)
	I1001 20:35:47.252732  945840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.256338006s)
	I1001 20:35:47.252784  945840 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-992970"
	I1001 20:35:47.252853  945840 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (12.010161712s)
	I1001 20:35:47.254919  945840 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-992970 addons enable metrics-server
	
	I1001 20:35:47.261652  945840 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1001 20:35:47.263961  945840 addons.go:510] duration metric: took 21.057192026s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
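The "Completed" lines above are the applies started around 20:35:35-20:35:38 finally succeeding once the apiserver became reachable, which is why their durations run 9-12s and the whole addon enable takes about 21s. As a hedged sketch of the structure the interleaved Run/Completed pairs suggest, the snippet below fans each addon's manifest bundle out to its own goroutine and waits for all of them; it is not minikube's actual addons.go code, and the apply argument stands in for a retrying helper like the one sketched earlier.

    package addons

    import (
        "fmt"
        "sync"
        "time"
    )

    // applyAddonBundles applies each addon's manifest list in its own goroutine
    // and waits for all of them, loosely mirroring the interleaved Run/Completed
    // pairs in the log above. Bundle names and contents are illustrative.
    func applyAddonBundles(bundles map[string][]string, apply func(manifest string) error) {
        var wg sync.WaitGroup
        for name, manifests := range bundles {
            wg.Add(1)
            go func(name string, manifests []string) {
                defer wg.Done()
                start := time.Now()
                for _, m := range manifests {
                    if err := apply(m); err != nil {
                        fmt.Printf("addon %s failed: %v\n", name, err)
                        return
                    }
                }
                fmt.Printf("addon %s applied in %s\n", name, time.Since(start))
            }(name, manifests)
        }
        wg.Wait()
    }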
	I1001 20:35:48.391377  945840 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:35:50.391701  945840 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:35:52.394893  945840 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:35:54.891504  945840 pod_ready.go:103] pod "kube-apiserver-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:35:55.391833  945840 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"True"
	I1001 20:35:55.391857  945840 pod_ready.go:82] duration metric: took 9.006461722s for pod "kube-apiserver-old-k8s-version-992970" in "kube-system" namespace to be "Ready" ...
	I1001 20:35:55.391867  945840 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace to be "Ready" ...
	I1001 20:35:57.398744  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:35:59.898728  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:02.397489  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:04.398609  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:06.408369  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:08.911376  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:11.399632  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:13.400296  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:15.899227  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:18.399066  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:20.898032  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:22.898463  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:25.398510  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:27.898081  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:29.898912  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:32.397865  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:34.404447  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:36.900288  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:39.399452  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:41.899230  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:43.901231  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:45.988134  945840 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:48.398228  945840 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"True"
	I1001 20:36:48.398254  945840 pod_ready.go:82] duration metric: took 53.00637914s for pod "kube-controller-manager-old-k8s-version-992970" in "kube-system" namespace to be "Ready" ...
	I1001 20:36:48.398266  945840 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-qmc4m" in "kube-system" namespace to be "Ready" ...
	I1001 20:36:48.403208  945840 pod_ready.go:93] pod "kube-proxy-qmc4m" in "kube-system" namespace has status "Ready":"True"
	I1001 20:36:48.403233  945840 pod_ready.go:82] duration metric: took 4.959905ms for pod "kube-proxy-qmc4m" in "kube-system" namespace to be "Ready" ...
	I1001 20:36:48.403244  945840 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace to be "Ready" ...
	I1001 20:36:50.410297  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:52.910035  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:55.410198  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:57.909648  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:36:59.909967  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:02.409700  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:04.909985  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:07.409553  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:09.409651  945840 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:11.409080  945840 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace has status "Ready":"True"
	I1001 20:37:11.409121  945840 pod_ready.go:82] duration metric: took 23.005868055s for pod "kube-scheduler-old-k8s-version-992970" in "kube-system" namespace to be "Ready" ...
	I1001 20:37:11.409133  945840 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace to be "Ready" ...
	I1001 20:37:13.414784  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:15.415356  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:17.415942  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:19.915392  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:22.416418  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:24.915921  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:27.414396  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:29.424051  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:31.915447  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:33.916069  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:36.425543  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:38.914957  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:41.415004  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:43.915351  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:46.414289  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:48.415445  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:50.914959  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:52.915573  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:55.414853  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:57.417432  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:37:59.915974  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:02.416624  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:04.915588  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:07.417117  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:09.915309  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:12.415686  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:14.915798  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:17.415983  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:19.914809  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:21.915344  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:23.917879  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:26.417922  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:28.914927  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:30.915063  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:33.415230  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:35.914859  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:37.915900  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:40.415055  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:42.915098  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:44.915198  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:46.915373  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:48.915688  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:50.916029  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:53.415462  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:55.915170  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:38:58.414970  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:00.415433  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:02.915563  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:04.916011  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:07.416060  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:09.916121  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:12.415237  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:14.426086  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:16.914747  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:18.915268  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:20.915370  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:22.915882  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:25.415619  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:27.915074  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:29.915255  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:31.915388  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:34.415958  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:36.416058  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:38.416421  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:40.914736  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:42.915196  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:44.915426  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:46.915580  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:49.416796  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:51.422852  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:53.915313  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:55.915432  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:39:58.415122  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:00.416082  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:02.416170  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:04.915074  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:06.915658  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:09.416104  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:11.916003  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:14.414937  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:16.416377  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:18.915349  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:20.915517  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:22.921527  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:25.415154  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:27.415458  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:29.915437  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:32.415834  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:34.916116  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:37.416365  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:39.915553  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:42.417611  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:44.915834  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:46.917003  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:49.416186  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:51.915029  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:54.414486  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:56.421392  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:40:58.915480  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:41:01.415240  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:41:03.415709  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:41:05.914576  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:41:07.916800  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:41:10.415500  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:41:11.416776  945840 pod_ready.go:82] duration metric: took 4m0.007627658s for pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace to be "Ready" ...
	E1001 20:41:11.416797  945840 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1001 20:41:11.416806  945840 pod_ready.go:39] duration metric: took 5m25.221237449s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
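(Editor's note: the four-minute wait recorded above is a plain readiness poll against the pod's status conditions that ends in "context deadline exceeded". As an illustrative sketch only, and not minikube's actual pod_ready code, the same pattern with client-go could look like the following; the kubeconfig path, poll interval, and timeout are assumed values.)

    // Sketch of a pod-readiness poll of the kind the log above records.
    // NOT minikube's pod_ready implementation; paths and durations are assumptions.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the pod reports the Ready condition or ctx expires.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, interval time.Duration) error {
        for {
            pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil // pod has status "Ready":"True"
                    }
                }
            }
            select {
            case <-ctx.Done():
                // Mirrors the "context deadline exceeded" outcome seen above.
                return fmt.Errorf("waitPodReady %s/%s: %w", ns, name, ctx.Err())
            case <-time.After(interval):
            }
        }
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-9975d5f86-g89nw", 2*time.Second); err != nil {
            fmt.Println("not ready:", err)
        }
    }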
	I1001 20:41:11.416819  945840 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:41:11.416849  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:41:11.416907  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:41:11.474408  945840 cri.go:89] found id: "52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2"
	I1001 20:41:11.474428  945840 cri.go:89] found id: "5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:41:11.474433  945840 cri.go:89] found id: ""
	I1001 20:41:11.474440  945840 logs.go:276] 2 containers: [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2 5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff]
	I1001 20:41:11.474502  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.479240  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.483755  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1001 20:41:11.483831  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:41:11.577187  945840 cri.go:89] found id: "2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15"
	I1001 20:41:11.577210  945840 cri.go:89] found id: "1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:41:11.577215  945840 cri.go:89] found id: ""
	I1001 20:41:11.577223  945840 logs.go:276] 2 containers: [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15 1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a]
	I1001 20:41:11.577279  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.586090  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.591567  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1001 20:41:11.591710  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:41:11.660526  945840 cri.go:89] found id: "9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648"
	I1001 20:41:11.660601  945840 cri.go:89] found id: "79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
	I1001 20:41:11.660620  945840 cri.go:89] found id: ""
	I1001 20:41:11.660642  945840 logs.go:276] 2 containers: [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648 79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763]
	I1001 20:41:11.660722  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.664752  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.668594  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:41:11.668718  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:41:11.739073  945840 cri.go:89] found id: "4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6"
	I1001 20:41:11.739147  945840 cri.go:89] found id: "7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:41:11.739167  945840 cri.go:89] found id: ""
	I1001 20:41:11.739190  945840 logs.go:276] 2 containers: [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6 7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251]
	I1001 20:41:11.739274  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.742663  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.746257  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:41:11.746327  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:41:11.801184  945840 cri.go:89] found id: "0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c"
	I1001 20:41:11.801209  945840 cri.go:89] found id: "d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:41:11.801214  945840 cri.go:89] found id: ""
	I1001 20:41:11.801221  945840 logs.go:276] 2 containers: [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b]
	I1001 20:41:11.801277  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.804844  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.808053  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:41:11.808123  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:41:11.857078  945840 cri.go:89] found id: "328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6"
	I1001 20:41:11.857153  945840 cri.go:89] found id: "080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:41:11.857173  945840 cri.go:89] found id: ""
	I1001 20:41:11.857194  945840 logs.go:276] 2 containers: [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6 080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7]
	I1001 20:41:11.857278  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.861068  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.864473  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1001 20:41:11.864591  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:41:11.912862  945840 cri.go:89] found id: "c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c"
	I1001 20:41:11.912884  945840 cri.go:89] found id: "f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:41:11.912889  945840 cri.go:89] found id: ""
	I1001 20:41:11.912896  945840 logs.go:276] 2 containers: [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf]
	I1001 20:41:11.912952  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.917077  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.921003  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:41:11.921079  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:41:11.978864  945840 cri.go:89] found id: "25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10"
	I1001 20:41:11.978891  945840 cri.go:89] found id: "24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3"
	I1001 20:41:11.978896  945840 cri.go:89] found id: ""
	I1001 20:41:11.978903  945840 logs.go:276] 2 containers: [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10 24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3]
	I1001 20:41:11.978965  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.982666  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.985955  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:41:11.986029  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:41:12.051716  945840 cri.go:89] found id: "9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3"
	I1001 20:41:12.051755  945840 cri.go:89] found id: ""
	I1001 20:41:12.051764  945840 logs.go:276] 1 containers: [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3]
	I1001 20:41:12.051819  945840 ssh_runner.go:195] Run: which crictl
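(Editor's note: the container discovery above is repeated invocations of "sudo crictl ps -a --quiet --name=<name>", one per component. A minimal sketch of that shell-out pattern is shown below; this is not the cri.go implementation, and it assumes crictl is on PATH and runnable via sudo as in the log.)

    // Sketch of the "crictl ps -a --quiet --name=<name>" discovery pattern above.
    // NOT minikube's cri.go; sudo/crictl availability is assumed.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns the IDs of all containers (any state) whose name matches.
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
        if err != nil {
            return nil, fmt.Errorf("crictl ps --name=%s: %w", name, err)
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, n := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(n)
            fmt.Printf("%s: %v (err=%v)\n", n, ids, err)
        }
    }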
	I1001 20:41:12.058110  945840 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:41:12.058132  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:41:12.242264  945840 logs.go:123] Gathering logs for kube-apiserver [5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff] ...
	I1001 20:41:12.242302  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:41:12.341006  945840 logs.go:123] Gathering logs for kube-scheduler [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6] ...
	I1001 20:41:12.341039  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6"
	I1001 20:41:12.411358  945840 logs.go:123] Gathering logs for container status ...
	I1001 20:41:12.411385  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:41:12.480370  945840 logs.go:123] Gathering logs for kubelet ...
	I1001 20:41:12.480501  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 20:41:12.550748  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.979627     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-bktfz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-bktfz" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.551045  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.979931     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hcc5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hcc5p" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.551607  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.980146     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.551863  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.983725     664 reflector.go:138] object-"kube-system"/"coredns-token-q5kkb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-q5kkb" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.552196  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.983973     664 reflector.go:138] object-"kube-system"/"metrics-server-token-qwlv6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-qwlv6" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.552442  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984045     664 reflector.go:138] object-"default"/"default-token-h86wg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-h86wg" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.552681  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984119     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.552982  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984187     664 reflector.go:138] object-"kube-system"/"kindnet-token-pv9f8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pv9f8" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.564423  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:48 old-k8s-version-992970 kubelet[664]: E1001 20:35:48.311000     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.565307  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:48 old-k8s-version-992970 kubelet[664]: E1001 20:35:48.692405     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.568137  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:02 old-k8s-version-992970 kubelet[664]: E1001 20:36:02.354833     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.568580  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:03 old-k8s-version-992970 kubelet[664]: E1001 20:36:03.679908     664 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-npz8p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-npz8p" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.574176  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:11 old-k8s-version-992970 kubelet[664]: E1001 20:36:11.784869     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.574810  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:12 old-k8s-version-992970 kubelet[664]: E1001 20:36:12.783230     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.575603  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:16 old-k8s-version-992970 kubelet[664]: E1001 20:36:16.435246     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.575826  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:17 old-k8s-version-992970 kubelet[664]: E1001 20:36:17.345729     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.576304  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:19 old-k8s-version-992970 kubelet[664]: E1001 20:36:19.814778     664 pod_workers.go:191] Error syncing pod 71d7d681-3057-4e08-8ce0-dd68e87dfd26 ("storage-provisioner_kube-system(71d7d681-3057-4e08-8ce0-dd68e87dfd26)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(71d7d681-3057-4e08-8ce0-dd68e87dfd26)"
	W1001 20:41:12.577268  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:29 old-k8s-version-992970 kubelet[664]: E1001 20:36:29.844345     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.579838  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:32 old-k8s-version-992970 kubelet[664]: E1001 20:36:32.373391     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.580336  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:36 old-k8s-version-992970 kubelet[664]: E1001 20:36:36.434745     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.580572  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:47 old-k8s-version-992970 kubelet[664]: E1001 20:36:47.345072     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.580931  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:49 old-k8s-version-992970 kubelet[664]: E1001 20:36:49.344533     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.581147  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:59 old-k8s-version-992970 kubelet[664]: E1001 20:36:59.345227     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.581760  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:01 old-k8s-version-992970 kubelet[664]: E1001 20:37:01.943400     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.582112  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:06 old-k8s-version-992970 kubelet[664]: E1001 20:37:06.435341     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.584773  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:13 old-k8s-version-992970 kubelet[664]: E1001 20:37:13.354154     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.585163  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:17 old-k8s-version-992970 kubelet[664]: E1001 20:37:17.345057     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.585546  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:28 old-k8s-version-992970 kubelet[664]: E1001 20:37:28.349670     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.585787  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:28 old-k8s-version-992970 kubelet[664]: E1001 20:37:28.364204     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.586466  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:43 old-k8s-version-992970 kubelet[664]: E1001 20:37:43.051579     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.586700  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:43 old-k8s-version-992970 kubelet[664]: E1001 20:37:43.345097     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.587094  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:46 old-k8s-version-992970 kubelet[664]: E1001 20:37:46.435452     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.587332  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:58 old-k8s-version-992970 kubelet[664]: E1001 20:37:58.344936     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.587728  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:00 old-k8s-version-992970 kubelet[664]: E1001 20:38:00.344778     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.588153  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:13 old-k8s-version-992970 kubelet[664]: E1001 20:38:13.344526     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.588395  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:13 old-k8s-version-992970 kubelet[664]: E1001 20:38:13.345426     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.588649  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:25 old-k8s-version-992970 kubelet[664]: E1001 20:38:25.345458     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.589054  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:26 old-k8s-version-992970 kubelet[664]: E1001 20:38:26.344683     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.591545  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:36 old-k8s-version-992970 kubelet[664]: E1001 20:38:36.353928     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.591913  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:38 old-k8s-version-992970 kubelet[664]: E1001 20:38:38.345052     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.592266  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:50 old-k8s-version-992970 kubelet[664]: E1001 20:38:50.345092     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.592488  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:51 old-k8s-version-992970 kubelet[664]: E1001 20:38:51.344896     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.592849  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:01 old-k8s-version-992970 kubelet[664]: E1001 20:39:01.344547     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.593066  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:02 old-k8s-version-992970 kubelet[664]: E1001 20:39:02.345834     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.593278  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:13 old-k8s-version-992970 kubelet[664]: E1001 20:39:13.344915     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.593891  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:15 old-k8s-version-992970 kubelet[664]: E1001 20:39:15.277864     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.594242  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:16 old-k8s-version-992970 kubelet[664]: E1001 20:39:16.434671     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.594496  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:26 old-k8s-version-992970 kubelet[664]: E1001 20:39:26.350298     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.594854  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:31 old-k8s-version-992970 kubelet[664]: E1001 20:39:31.344495     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.595324  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:38 old-k8s-version-992970 kubelet[664]: E1001 20:39:38.344941     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.595744  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:43 old-k8s-version-992970 kubelet[664]: E1001 20:39:43.344445     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.595957  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:49 old-k8s-version-992970 kubelet[664]: E1001 20:39:49.344870     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.599123  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:54 old-k8s-version-992970 kubelet[664]: E1001 20:39:54.344924     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.599369  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:00 old-k8s-version-992970 kubelet[664]: E1001 20:40:00.348991     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.599828  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:07 old-k8s-version-992970 kubelet[664]: E1001 20:40:07.344581     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.600062  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:11 old-k8s-version-992970 kubelet[664]: E1001 20:40:11.347785     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.600816  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:21 old-k8s-version-992970 kubelet[664]: E1001 20:40:21.344561     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.601059  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:26 old-k8s-version-992970 kubelet[664]: E1001 20:40:26.345878     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.601415  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:32 old-k8s-version-992970 kubelet[664]: E1001 20:40:32.344565     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.601626  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:37 old-k8s-version-992970 kubelet[664]: E1001 20:40:37.344837     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.601998  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:47 old-k8s-version-992970 kubelet[664]: E1001 20:40:47.345723     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.602224  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.602617  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.602837  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.603193  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
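(Editor's note: the "Found kubelet problem" warnings above come from scanning the kubelet journal, gathered earlier with "sudo journalctl -u kubelet -n 400", for error-level entries; here they are dominated by the metrics-server ImagePull failures against fake.domain and the dashboard-metrics-scraper CrashLoopBackOff. The sketch below shows one rough way such a scan could be done; it is not minikube's logs.go, and the matching heuristics are assumptions for the example.)

    // Sketch of scanning the kubelet journal for problem lines, as reflected above.
    // NOT minikube's logs.go; the match patterns below are illustrative assumptions.
    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
        "strings"
    )

    // kubeletProblems tails the kubelet journal and returns lines that look like errors.
    func kubeletProblems(tail int) ([]string, error) {
        out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", fmt.Sprint(tail)).Output()
        if err != nil {
            return nil, fmt.Errorf("journalctl -u kubelet: %w", err)
        }
        var problems []string
        sc := bufio.NewScanner(bytes.NewReader(out))
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            // Rough heuristic: klog error records start with "E" plus a date (e.g. "E1001 20:35:45"),
            // and pod sync failures carry "Error syncing pod".
            if strings.Contains(line, " E1") || strings.Contains(line, "Error syncing pod") {
                problems = append(problems, line)
            }
        }
        return problems, sc.Err()
    }

    func main() {
        ps, err := kubeletProblems(400)
        fmt.Printf("found %d problem lines (err=%v)\n", len(ps), err)
        for _, p := range ps {
            fmt.Println(p)
        }
    }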
	I1001 20:41:12.603225  945840 logs.go:123] Gathering logs for kube-scheduler [7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251] ...
	I1001 20:41:12.603254  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:41:12.658419  945840 logs.go:123] Gathering logs for kube-proxy [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c] ...
	I1001 20:41:12.658497  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c"
	I1001 20:41:12.710694  945840 logs.go:123] Gathering logs for kube-controller-manager [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6] ...
	I1001 20:41:12.710721  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6"
	I1001 20:41:12.809260  945840 logs.go:123] Gathering logs for kube-controller-manager [080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7] ...
	I1001 20:41:12.809297  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:41:12.935124  945840 logs.go:123] Gathering logs for kindnet [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c] ...
	I1001 20:41:12.935346  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c"
	I1001 20:41:13.008699  945840 logs.go:123] Gathering logs for kube-apiserver [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2] ...
	I1001 20:41:13.008767  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2"
	I1001 20:41:13.086831  945840 logs.go:123] Gathering logs for etcd [1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a] ...
	I1001 20:41:13.086867  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:41:13.151346  945840 logs.go:123] Gathering logs for coredns [79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763] ...
	I1001 20:41:13.151382  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
	I1001 20:41:13.200242  945840 logs.go:123] Gathering logs for storage-provisioner [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10] ...
	I1001 20:41:13.200279  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10"
	I1001 20:41:13.247808  945840 logs.go:123] Gathering logs for storage-provisioner [24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3] ...
	I1001 20:41:13.247837  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3"
	I1001 20:41:13.305453  945840 logs.go:123] Gathering logs for kubernetes-dashboard [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3] ...
	I1001 20:41:13.305482  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3"
	I1001 20:41:13.431378  945840 logs.go:123] Gathering logs for containerd ...
	I1001 20:41:13.431414  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1001 20:41:13.507547  945840 logs.go:123] Gathering logs for dmesg ...
	I1001 20:41:13.507624  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:41:13.528688  945840 logs.go:123] Gathering logs for etcd [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15] ...
	I1001 20:41:13.528715  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15"
	I1001 20:41:13.643064  945840 logs.go:123] Gathering logs for coredns [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648] ...
	I1001 20:41:13.643141  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648"
	I1001 20:41:13.718811  945840 logs.go:123] Gathering logs for kube-proxy [d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b] ...
	I1001 20:41:13.718840  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:41:13.790938  945840 logs.go:123] Gathering logs for kindnet [f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf] ...
	I1001 20:41:13.790981  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:41:13.864634  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:13.864656  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 20:41:13.864713  945840 out.go:270] X Problems detected in kubelet:
	W1001 20:41:13.864723  945840 out.go:270]   Oct 01 20:40:47 old-k8s-version-992970 kubelet[664]: E1001 20:40:47.345723     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:13.864731  945840 out.go:270]   Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:13.864739  945840 out.go:270]   Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:13.864745  945840 out.go:270]   Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:13.864755  945840 out.go:270]   Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	I1001 20:41:13.864760  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:13.864766  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:41:23.866028  945840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:41:23.881517  945840 api_server.go:72] duration metric: took 5m57.675060864s to wait for apiserver process to appear ...
	I1001 20:41:23.881538  945840 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:41:23.881574  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:41:23.881630  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:41:23.934034  945840 cri.go:89] found id: "52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2"
	I1001 20:41:23.934053  945840 cri.go:89] found id: "5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:41:23.934058  945840 cri.go:89] found id: ""
	I1001 20:41:23.934065  945840 logs.go:276] 2 containers: [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2 5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff]
	I1001 20:41:23.934131  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:23.938366  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:23.942113  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1001 20:41:23.942178  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:41:24.002772  945840 cri.go:89] found id: "2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15"
	I1001 20:41:24.002793  945840 cri.go:89] found id: "1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:41:24.002799  945840 cri.go:89] found id: ""
	I1001 20:41:24.002806  945840 logs.go:276] 2 containers: [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15 1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a]
	I1001 20:41:24.002864  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.007635  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.011475  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1001 20:41:24.011546  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:41:24.062881  945840 cri.go:89] found id: "9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648"
	I1001 20:41:24.062961  945840 cri.go:89] found id: "79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
	I1001 20:41:24.062983  945840 cri.go:89] found id: ""
	I1001 20:41:24.063006  945840 logs.go:276] 2 containers: [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648 79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763]
	I1001 20:41:24.063110  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.067221  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.070908  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:41:24.071040  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:41:24.122643  945840 cri.go:89] found id: "4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6"
	I1001 20:41:24.122664  945840 cri.go:89] found id: "7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:41:24.122670  945840 cri.go:89] found id: ""
	I1001 20:41:24.122678  945840 logs.go:276] 2 containers: [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6 7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251]
	I1001 20:41:24.122739  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.126795  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.130802  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:41:24.130927  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:41:24.181796  945840 cri.go:89] found id: "0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c"
	I1001 20:41:24.181870  945840 cri.go:89] found id: "d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:41:24.181891  945840 cri.go:89] found id: ""
	I1001 20:41:24.181913  945840 logs.go:276] 2 containers: [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b]
	I1001 20:41:24.182000  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.186431  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.190291  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:41:24.190413  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:41:24.239513  945840 cri.go:89] found id: "328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6"
	I1001 20:41:24.239585  945840 cri.go:89] found id: "080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:41:24.239604  945840 cri.go:89] found id: ""
	I1001 20:41:24.239626  945840 logs.go:276] 2 containers: [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6 080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7]
	I1001 20:41:24.239711  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.244080  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.247753  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1001 20:41:24.247869  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:41:24.299189  945840 cri.go:89] found id: "c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c"
	I1001 20:41:24.299257  945840 cri.go:89] found id: "f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:41:24.299279  945840 cri.go:89] found id: ""
	I1001 20:41:24.299300  945840 logs.go:276] 2 containers: [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf]
	I1001 20:41:24.299384  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.303343  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.307243  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:41:24.307309  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:41:24.368940  945840 cri.go:89] found id: "25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10"
	I1001 20:41:24.368959  945840 cri.go:89] found id: "24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3"
	I1001 20:41:24.368964  945840 cri.go:89] found id: ""
	I1001 20:41:24.368971  945840 logs.go:276] 2 containers: [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10 24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3]
	I1001 20:41:24.369025  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.373383  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.383396  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:41:24.383465  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:41:24.460516  945840 cri.go:89] found id: "9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3"
	I1001 20:41:24.460535  945840 cri.go:89] found id: ""
	I1001 20:41:24.460543  945840 logs.go:276] 1 containers: [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3]
	I1001 20:41:24.460602  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.465930  945840 logs.go:123] Gathering logs for coredns [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648] ...
	I1001 20:41:24.466006  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648"
	I1001 20:41:24.533082  945840 logs.go:123] Gathering logs for coredns [79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763] ...
	I1001 20:41:24.533175  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
	I1001 20:41:24.608751  945840 logs.go:123] Gathering logs for containerd ...
	I1001 20:41:24.608781  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1001 20:41:24.686077  945840 logs.go:123] Gathering logs for container status ...
	I1001 20:41:24.686117  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:41:24.749063  945840 logs.go:123] Gathering logs for kubernetes-dashboard [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3] ...
	I1001 20:41:24.749093  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3"
	I1001 20:41:24.813183  945840 logs.go:123] Gathering logs for kubelet ...
	I1001 20:41:24.813211  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 20:41:24.874850  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.979627     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-bktfz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-bktfz" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.875094  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.979931     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hcc5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hcc5p" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.875690  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.980146     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.875929  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.983725     664 reflector.go:138] object-"kube-system"/"coredns-token-q5kkb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-q5kkb" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.876172  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.983973     664 reflector.go:138] object-"kube-system"/"metrics-server-token-qwlv6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-qwlv6" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.876545  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984045     664 reflector.go:138] object-"default"/"default-token-h86wg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-h86wg" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.876777  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984119     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.877019  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984187     664 reflector.go:138] object-"kube-system"/"kindnet-token-pv9f8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pv9f8" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.885469  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:48 old-k8s-version-992970 kubelet[664]: E1001 20:35:48.311000     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.886649  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:48 old-k8s-version-992970 kubelet[664]: E1001 20:35:48.692405     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.889553  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:02 old-k8s-version-992970 kubelet[664]: E1001 20:36:02.354833     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.889958  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:03 old-k8s-version-992970 kubelet[664]: E1001 20:36:03.679908     664 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-npz8p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-npz8p" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.893883  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:11 old-k8s-version-992970 kubelet[664]: E1001 20:36:11.784869     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.894223  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:12 old-k8s-version-992970 kubelet[664]: E1001 20:36:12.783230     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.894958  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:16 old-k8s-version-992970 kubelet[664]: E1001 20:36:16.435246     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.895145  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:17 old-k8s-version-992970 kubelet[664]: E1001 20:36:17.345729     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.895578  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:19 old-k8s-version-992970 kubelet[664]: E1001 20:36:19.814778     664 pod_workers.go:191] Error syncing pod 71d7d681-3057-4e08-8ce0-dd68e87dfd26 ("storage-provisioner_kube-system(71d7d681-3057-4e08-8ce0-dd68e87dfd26)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(71d7d681-3057-4e08-8ce0-dd68e87dfd26)"
	W1001 20:41:24.896576  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:29 old-k8s-version-992970 kubelet[664]: E1001 20:36:29.844345     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.899079  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:32 old-k8s-version-992970 kubelet[664]: E1001 20:36:32.373391     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.899584  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:36 old-k8s-version-992970 kubelet[664]: E1001 20:36:36.434745     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.899808  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:47 old-k8s-version-992970 kubelet[664]: E1001 20:36:47.345072     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.900159  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:49 old-k8s-version-992970 kubelet[664]: E1001 20:36:49.344533     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.900364  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:59 old-k8s-version-992970 kubelet[664]: E1001 20:36:59.345227     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.901014  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:01 old-k8s-version-992970 kubelet[664]: E1001 20:37:01.943400     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.901364  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:06 old-k8s-version-992970 kubelet[664]: E1001 20:37:06.435341     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.903814  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:13 old-k8s-version-992970 kubelet[664]: E1001 20:37:13.354154     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.904170  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:17 old-k8s-version-992970 kubelet[664]: E1001 20:37:17.345057     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.904544  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:28 old-k8s-version-992970 kubelet[664]: E1001 20:37:28.349670     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.904766  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:28 old-k8s-version-992970 kubelet[664]: E1001 20:37:28.364204     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.905377  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:43 old-k8s-version-992970 kubelet[664]: E1001 20:37:43.051579     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.905588  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:43 old-k8s-version-992970 kubelet[664]: E1001 20:37:43.345097     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.905990  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:46 old-k8s-version-992970 kubelet[664]: E1001 20:37:46.435452     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.906195  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:58 old-k8s-version-992970 kubelet[664]: E1001 20:37:58.344936     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.906542  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:00 old-k8s-version-992970 kubelet[664]: E1001 20:38:00.344778     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.906893  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:13 old-k8s-version-992970 kubelet[664]: E1001 20:38:13.344526     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.907103  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:13 old-k8s-version-992970 kubelet[664]: E1001 20:38:13.345426     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.908033  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:25 old-k8s-version-992970 kubelet[664]: E1001 20:38:25.345458     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.908408  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:26 old-k8s-version-992970 kubelet[664]: E1001 20:38:26.344683     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.912918  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:36 old-k8s-version-992970 kubelet[664]: E1001 20:38:36.353928     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.913282  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:38 old-k8s-version-992970 kubelet[664]: E1001 20:38:38.345052     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.913655  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:50 old-k8s-version-992970 kubelet[664]: E1001 20:38:50.345092     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.913842  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:51 old-k8s-version-992970 kubelet[664]: E1001 20:38:51.344896     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.914173  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:01 old-k8s-version-992970 kubelet[664]: E1001 20:39:01.344547     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.914353  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:02 old-k8s-version-992970 kubelet[664]: E1001 20:39:02.345834     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.914534  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:13 old-k8s-version-992970 kubelet[664]: E1001 20:39:13.344915     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.915125  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:15 old-k8s-version-992970 kubelet[664]: E1001 20:39:15.277864     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.915447  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:16 old-k8s-version-992970 kubelet[664]: E1001 20:39:16.434671     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.915665  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:26 old-k8s-version-992970 kubelet[664]: E1001 20:39:26.350298     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.916033  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:31 old-k8s-version-992970 kubelet[664]: E1001 20:39:31.344495     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.916415  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:38 old-k8s-version-992970 kubelet[664]: E1001 20:39:38.344941     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.916812  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:43 old-k8s-version-992970 kubelet[664]: E1001 20:39:43.344445     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.917030  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:49 old-k8s-version-992970 kubelet[664]: E1001 20:39:49.344870     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.917374  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:54 old-k8s-version-992970 kubelet[664]: E1001 20:39:54.344924     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.917578  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:00 old-k8s-version-992970 kubelet[664]: E1001 20:40:00.348991     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.917925  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:07 old-k8s-version-992970 kubelet[664]: E1001 20:40:07.344581     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.918129  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:11 old-k8s-version-992970 kubelet[664]: E1001 20:40:11.347785     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.918475  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:21 old-k8s-version-992970 kubelet[664]: E1001 20:40:21.344561     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.918682  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:26 old-k8s-version-992970 kubelet[664]: E1001 20:40:26.345878     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.919028  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:32 old-k8s-version-992970 kubelet[664]: E1001 20:40:32.344565     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.919228  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:37 old-k8s-version-992970 kubelet[664]: E1001 20:40:37.344837     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.919578  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:47 old-k8s-version-992970 kubelet[664]: E1001 20:40:47.345723     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.919778  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.920120  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.920347  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.920738  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.920948  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:14 old-k8s-version-992970 kubelet[664]: E1001 20:41:14.350030     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1001 20:41:24.920978  945840 logs.go:123] Gathering logs for dmesg ...
	I1001 20:41:24.921008  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:41:24.943590  945840 logs.go:123] Gathering logs for etcd [1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a] ...
	I1001 20:41:24.943618  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:41:25.014814  945840 logs.go:123] Gathering logs for kube-scheduler [7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251] ...
	I1001 20:41:25.014982  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:41:25.079403  945840 logs.go:123] Gathering logs for kube-controller-manager [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6] ...
	I1001 20:41:25.079592  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6"
	I1001 20:41:25.156269  945840 logs.go:123] Gathering logs for kube-controller-manager [080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7] ...
	I1001 20:41:25.156352  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:41:25.245466  945840 logs.go:123] Gathering logs for kube-apiserver [5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff] ...
	I1001 20:41:25.245543  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:41:25.311121  945840 logs.go:123] Gathering logs for etcd [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15] ...
	I1001 20:41:25.311160  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15"
	I1001 20:41:25.374733  945840 logs.go:123] Gathering logs for kube-scheduler [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6] ...
	I1001 20:41:25.374871  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6"
	I1001 20:41:25.427545  945840 logs.go:123] Gathering logs for kindnet [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c] ...
	I1001 20:41:25.427570  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c"
	I1001 20:41:25.498817  945840 logs.go:123] Gathering logs for kindnet [f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf] ...
	I1001 20:41:25.498895  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:41:25.566874  945840 logs.go:123] Gathering logs for storage-provisioner [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10] ...
	I1001 20:41:25.566948  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10"
	I1001 20:41:25.627907  945840 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:41:25.627981  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:41:25.800760  945840 logs.go:123] Gathering logs for kube-apiserver [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2] ...
	I1001 20:41:25.800834  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2"
	I1001 20:41:25.880615  945840 logs.go:123] Gathering logs for kube-proxy [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c] ...
	I1001 20:41:25.880769  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c"
	I1001 20:41:25.937302  945840 logs.go:123] Gathering logs for kube-proxy [d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b] ...
	I1001 20:41:25.937384  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:41:25.999488  945840 logs.go:123] Gathering logs for storage-provisioner [24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3] ...
	I1001 20:41:25.999576  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3"
	I1001 20:41:26.047120  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:26.047192  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 20:41:26.047285  945840 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1001 20:41:26.047447  945840 out.go:270]   Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:26.047491  945840 out.go:270]   Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	  Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:26.047589  945840 out.go:270]   Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:26.047649  945840 out.go:270]   Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	  Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:26.047734  945840 out.go:270]   Oct 01 20:41:14 old-k8s-version-992970 kubelet[664]: E1001 20:41:14.350030     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 01 20:41:14 old-k8s-version-992970 kubelet[664]: E1001 20:41:14.350030     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1001 20:41:26.047798  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:26.047823  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:41:36.048948  945840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1001 20:41:36.059936  945840 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1001 20:41:36.062399  945840 out.go:201] 
	W1001 20:41:36.064574  945840 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1001 20:41:36.064614  945840 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1001 20:41:36.064637  945840 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1001 20:41:36.064642  945840 out.go:270] * 
	* 
	W1001 20:41:36.065702  945840 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:41:36.067858  945840 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-992970 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
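The stderr above already carries minikube's own remediation hint for K8S_UNHEALTHY_CONTROL_PLANE (delete the profile state and retry; see kubernetes/minikube#11417 linked in the log). A minimal manual cleanup/retry sketch, using only the binary and the args quoted verbatim in the failure line above; note that --all removes every profile on the host, including the embed-certs-734252 run visible later in this log:

	# wipe all profiles and cached state, as suggested by the exit message
	out/minikube-linux-arm64 delete --all --purge
	# re-run the exact second start that exited with status 102
	out/minikube-linux-arm64 start -p old-k8s-version-992970 --memory=2200 --alsologtostderr \
	  --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts \
	  --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0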
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-992970
helpers_test.go:235: (dbg) docker inspect old-k8s-version-992970:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37ed44fefeb1962a98146fd4f653416dd33b6df7ab98234e4604254f3844912f",
	        "Created": "2024-10-01T20:32:27.080661524Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 946043,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-01T20:35:18.547703224Z",
	            "FinishedAt": "2024-10-01T20:35:17.526775775Z"
	        },
	        "Image": "sha256:b5f10d57944829de859b6363a7c57065ccc6b1805dabb3bce283aaecb83f3acc",
	        "ResolvConfPath": "/var/lib/docker/containers/37ed44fefeb1962a98146fd4f653416dd33b6df7ab98234e4604254f3844912f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37ed44fefeb1962a98146fd4f653416dd33b6df7ab98234e4604254f3844912f/hostname",
	        "HostsPath": "/var/lib/docker/containers/37ed44fefeb1962a98146fd4f653416dd33b6df7ab98234e4604254f3844912f/hosts",
	        "LogPath": "/var/lib/docker/containers/37ed44fefeb1962a98146fd4f653416dd33b6df7ab98234e4604254f3844912f/37ed44fefeb1962a98146fd4f653416dd33b6df7ab98234e4604254f3844912f-json.log",
	        "Name": "/old-k8s-version-992970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-992970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-992970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/b6a8f37e0dc7d5913c076126c9d2e511ea11487307bad395e294a539ce8a414a-init/diff:/var/lib/docker/overlay2/bda54826f89b5827b169734fdf2fa880f8697dc2c03a301f63e7d6df420607d8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b6a8f37e0dc7d5913c076126c9d2e511ea11487307bad395e294a539ce8a414a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b6a8f37e0dc7d5913c076126c9d2e511ea11487307bad395e294a539ce8a414a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b6a8f37e0dc7d5913c076126c9d2e511ea11487307bad395e294a539ce8a414a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-992970",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-992970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-992970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-992970",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-992970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8a4c7e078d0f406fd913d319039d198dc7617967d7b8f05f9a75fcc6adbf273",
	            "SandboxKey": "/var/run/docker/netns/f8a4c7e078d0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33830"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-992970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "805bafacc58f04edf15b5c16f47f9a20178d95c93d2c4c2fe3be2b7686bb8750",
	                    "EndpointID": "ceb1dd2e0e8e61c91e3d17d25032cba16d4c31d24988649c8542ce5462db929a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-992970",
	                        "37ed44fefeb1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
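The state, restart count, and published 8443/tcp mapping shown in the inspect dump above can also be pulled individually with docker inspect format templates; a small sketch assuming only the container name from this run (standard docker CLI, not part of the test harness):

	# container state and restart count
	docker inspect -f '{{.State.Status}} restarts={{.RestartCount}}' old-k8s-version-992970
	# host port backing 8443/tcp (33832 in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-992970
	docker port old-k8s-version-992970 8443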
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-992970 -n old-k8s-version-992970
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-992970 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-992970 logs -n 25: (3.136330368s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-253225                              | cert-expiration-253225   | jenkins | v1.34.0 | 01 Oct 24 20:31 UTC | 01 Oct 24 20:31 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-822393                               | force-systemd-env-822393 | jenkins | v1.34.0 | 01 Oct 24 20:31 UTC | 01 Oct 24 20:31 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-822393                            | force-systemd-env-822393 | jenkins | v1.34.0 | 01 Oct 24 20:31 UTC | 01 Oct 24 20:31 UTC |
	| start   | -p cert-options-232828                                 | cert-options-232828      | jenkins | v1.34.0 | 01 Oct 24 20:31 UTC | 01 Oct 24 20:32 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-232828 ssh                                | cert-options-232828      | jenkins | v1.34.0 | 01 Oct 24 20:32 UTC | 01 Oct 24 20:32 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-232828 -- sudo                         | cert-options-232828      | jenkins | v1.34.0 | 01 Oct 24 20:32 UTC | 01 Oct 24 20:32 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-232828                                 | cert-options-232828      | jenkins | v1.34.0 | 01 Oct 24 20:32 UTC | 01 Oct 24 20:32 UTC |
	| start   | -p old-k8s-version-992970                              | old-k8s-version-992970   | jenkins | v1.34.0 | 01 Oct 24 20:32 UTC | 01 Oct 24 20:34 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-253225                              | cert-expiration-253225   | jenkins | v1.34.0 | 01 Oct 24 20:34 UTC | 01 Oct 24 20:34 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-253225                              | cert-expiration-253225   | jenkins | v1.34.0 | 01 Oct 24 20:34 UTC | 01 Oct 24 20:34 UTC |
	| start   | -p no-preload-381888                                   | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:34 UTC | 01 Oct 24 20:36 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-992970        | old-k8s-version-992970   | jenkins | v1.34.0 | 01 Oct 24 20:35 UTC | 01 Oct 24 20:35 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-992970                              | old-k8s-version-992970   | jenkins | v1.34.0 | 01 Oct 24 20:35 UTC | 01 Oct 24 20:35 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-992970             | old-k8s-version-992970   | jenkins | v1.34.0 | 01 Oct 24 20:35 UTC | 01 Oct 24 20:35 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-992970                              | old-k8s-version-992970   | jenkins | v1.34.0 | 01 Oct 24 20:35 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-381888             | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:36 UTC | 01 Oct 24 20:36 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-381888                                   | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:36 UTC | 01 Oct 24 20:36 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-381888                  | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:36 UTC | 01 Oct 24 20:36 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-381888                                   | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:36 UTC | 01 Oct 24 20:40 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-381888 image list                           | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-381888                                   | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-381888                                   | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-381888                                   | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	| delete  | -p no-preload-381888                                   | no-preload-381888        | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC | 01 Oct 24 20:41 UTC |
	| start   | -p embed-certs-734252                                  | embed-certs-734252       | jenkins | v1.34.0 | 01 Oct 24 20:41 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 20:41:12
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 20:41:12.475086  956726 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:41:12.475310  956726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:41:12.475319  956726 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:12.475325  956726 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:41:12.475703  956726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:41:12.476326  956726 out.go:352] Setting JSON to false
	I1001 20:41:12.477561  956726 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15820,"bootTime":1727799453,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 20:41:12.477651  956726 start.go:139] virtualization:  
	I1001 20:41:12.482125  956726 out.go:177] * [embed-certs-734252] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 20:41:12.488639  956726 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:41:12.488881  956726 notify.go:220] Checking for updates...
	I1001 20:41:12.493361  956726 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:41:12.502671  956726 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 20:41:12.504624  956726 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	I1001 20:41:12.506326  956726 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 20:41:12.508185  956726 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:41:12.510365  956726 config.go:182] Loaded profile config "old-k8s-version-992970": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1001 20:41:12.510456  956726 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:41:12.555289  956726 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 20:41:12.555399  956726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:41:12.646892  956726 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 20:41:12.636126535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:41:12.647015  956726 docker.go:318] overlay module found
	I1001 20:41:12.649140  956726 out.go:177] * Using the docker driver based on user configuration
	I1001 20:41:12.650902  956726 start.go:297] selected driver: docker
	I1001 20:41:12.650924  956726 start.go:901] validating driver "docker" against <nil>
	I1001 20:41:12.650954  956726 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:41:12.651965  956726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:41:12.735567  956726 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 20:41:12.725167565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:41:12.735785  956726 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 20:41:12.736023  956726 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1001 20:41:12.738045  956726 out.go:177] * Using Docker driver with root privileges
	I1001 20:41:12.739699  956726 cni.go:84] Creating CNI manager for ""
	I1001 20:41:12.739775  956726 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 20:41:12.739789  956726 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 20:41:12.739867  956726 start.go:340] cluster config:
	{Name:embed-certs-734252 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-734252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:41:12.742461  956726 out.go:177] * Starting "embed-certs-734252" primary control-plane node in "embed-certs-734252" cluster
	I1001 20:41:12.744183  956726 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1001 20:41:12.746213  956726 out.go:177] * Pulling base image v0.0.45-1727731891-master ...
	I1001 20:41:12.747854  956726 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 20:41:12.747902  956726 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1001 20:41:12.747911  956726 cache.go:56] Caching tarball of preloaded images
	I1001 20:41:12.747999  956726 preload.go:172] Found /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1001 20:41:12.748008  956726 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1001 20:41:12.748126  956726 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/config.json ...
	I1001 20:41:12.748146  956726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/config.json: {Name:mke0fa237e1e001c5c73561663fd1bbcaaef9373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:12.748305  956726 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 20:41:12.801283  956726 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon, skipping pull
	I1001 20:41:12.801308  956726 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in daemon, skipping load
	I1001 20:41:12.801337  956726 cache.go:194] Successfully downloaded all kic artifacts
	I1001 20:41:12.801370  956726 start.go:360] acquireMachinesLock for embed-certs-734252: {Name:mk259eafcd1919be6f4cbcced9e6b7742b222436 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1001 20:41:12.801483  956726 start.go:364] duration metric: took 92.436µs to acquireMachinesLock for "embed-certs-734252"
	I1001 20:41:12.801516  956726 start.go:93] Provisioning new machine with config: &{Name:embed-certs-734252 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-734252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1001 20:41:12.801595  956726 start.go:125] createHost starting for "" (driver="docker")
	I1001 20:41:10.415500  945840 pod_ready.go:103] pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace has status "Ready":"False"
	I1001 20:41:11.416776  945840 pod_ready.go:82] duration metric: took 4m0.007627658s for pod "metrics-server-9975d5f86-g89nw" in "kube-system" namespace to be "Ready" ...
	E1001 20:41:11.416797  945840 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1001 20:41:11.416806  945840 pod_ready.go:39] duration metric: took 5m25.221237449s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1001 20:41:11.416819  945840 api_server.go:52] waiting for apiserver process to appear ...
	I1001 20:41:11.416849  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:41:11.416907  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:41:11.474408  945840 cri.go:89] found id: "52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2"
	I1001 20:41:11.474428  945840 cri.go:89] found id: "5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:41:11.474433  945840 cri.go:89] found id: ""
	I1001 20:41:11.474440  945840 logs.go:276] 2 containers: [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2 5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff]
	I1001 20:41:11.474502  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.479240  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.483755  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1001 20:41:11.483831  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:41:11.577187  945840 cri.go:89] found id: "2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15"
	I1001 20:41:11.577210  945840 cri.go:89] found id: "1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:41:11.577215  945840 cri.go:89] found id: ""
	I1001 20:41:11.577223  945840 logs.go:276] 2 containers: [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15 1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a]
	I1001 20:41:11.577279  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.586090  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.591567  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1001 20:41:11.591710  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:41:11.660526  945840 cri.go:89] found id: "9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648"
	I1001 20:41:11.660601  945840 cri.go:89] found id: "79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
	I1001 20:41:11.660620  945840 cri.go:89] found id: ""
	I1001 20:41:11.660642  945840 logs.go:276] 2 containers: [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648 79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763]
	I1001 20:41:11.660722  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.664752  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.668594  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:41:11.668718  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:41:11.739073  945840 cri.go:89] found id: "4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6"
	I1001 20:41:11.739147  945840 cri.go:89] found id: "7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:41:11.739167  945840 cri.go:89] found id: ""
	I1001 20:41:11.739190  945840 logs.go:276] 2 containers: [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6 7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251]
	I1001 20:41:11.739274  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.742663  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.746257  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:41:11.746327  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:41:11.801184  945840 cri.go:89] found id: "0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c"
	I1001 20:41:11.801209  945840 cri.go:89] found id: "d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:41:11.801214  945840 cri.go:89] found id: ""
	I1001 20:41:11.801221  945840 logs.go:276] 2 containers: [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b]
	I1001 20:41:11.801277  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.804844  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.808053  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:41:11.808123  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:41:11.857078  945840 cri.go:89] found id: "328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6"
	I1001 20:41:11.857153  945840 cri.go:89] found id: "080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:41:11.857173  945840 cri.go:89] found id: ""
	I1001 20:41:11.857194  945840 logs.go:276] 2 containers: [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6 080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7]
	I1001 20:41:11.857278  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.861068  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.864473  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1001 20:41:11.864591  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:41:11.912862  945840 cri.go:89] found id: "c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c"
	I1001 20:41:11.912884  945840 cri.go:89] found id: "f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:41:11.912889  945840 cri.go:89] found id: ""
	I1001 20:41:11.912896  945840 logs.go:276] 2 containers: [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf]
	I1001 20:41:11.912952  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.917077  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.921003  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:41:11.921079  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:41:11.978864  945840 cri.go:89] found id: "25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10"
	I1001 20:41:11.978891  945840 cri.go:89] found id: "24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3"
	I1001 20:41:11.978896  945840 cri.go:89] found id: ""
	I1001 20:41:11.978903  945840 logs.go:276] 2 containers: [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10 24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3]
	I1001 20:41:11.978965  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.982666  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:11.985955  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:41:11.986029  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:41:12.051716  945840 cri.go:89] found id: "9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3"
	I1001 20:41:12.051755  945840 cri.go:89] found id: ""
	I1001 20:41:12.051764  945840 logs.go:276] 1 containers: [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3]
	I1001 20:41:12.051819  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:12.058110  945840 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:41:12.058132  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:41:12.242264  945840 logs.go:123] Gathering logs for kube-apiserver [5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff] ...
	I1001 20:41:12.242302  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:41:12.341006  945840 logs.go:123] Gathering logs for kube-scheduler [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6] ...
	I1001 20:41:12.341039  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6"
	I1001 20:41:12.411358  945840 logs.go:123] Gathering logs for container status ...
	I1001 20:41:12.411385  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:41:12.480370  945840 logs.go:123] Gathering logs for kubelet ...
	I1001 20:41:12.480501  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 20:41:12.550748  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.979627     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-bktfz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-bktfz" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.551045  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.979931     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hcc5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hcc5p" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.551607  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.980146     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.551863  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.983725     664 reflector.go:138] object-"kube-system"/"coredns-token-q5kkb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-q5kkb" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.552196  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.983973     664 reflector.go:138] object-"kube-system"/"metrics-server-token-qwlv6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-qwlv6" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.552442  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984045     664 reflector.go:138] object-"default"/"default-token-h86wg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-h86wg" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.552681  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984119     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.552982  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984187     664 reflector.go:138] object-"kube-system"/"kindnet-token-pv9f8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pv9f8" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.564423  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:48 old-k8s-version-992970 kubelet[664]: E1001 20:35:48.311000     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.565307  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:48 old-k8s-version-992970 kubelet[664]: E1001 20:35:48.692405     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.568137  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:02 old-k8s-version-992970 kubelet[664]: E1001 20:36:02.354833     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.568580  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:03 old-k8s-version-992970 kubelet[664]: E1001 20:36:03.679908     664 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-npz8p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-npz8p" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:12.574176  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:11 old-k8s-version-992970 kubelet[664]: E1001 20:36:11.784869     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.574810  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:12 old-k8s-version-992970 kubelet[664]: E1001 20:36:12.783230     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.575603  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:16 old-k8s-version-992970 kubelet[664]: E1001 20:36:16.435246     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.575826  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:17 old-k8s-version-992970 kubelet[664]: E1001 20:36:17.345729     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.576304  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:19 old-k8s-version-992970 kubelet[664]: E1001 20:36:19.814778     664 pod_workers.go:191] Error syncing pod 71d7d681-3057-4e08-8ce0-dd68e87dfd26 ("storage-provisioner_kube-system(71d7d681-3057-4e08-8ce0-dd68e87dfd26)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(71d7d681-3057-4e08-8ce0-dd68e87dfd26)"
	W1001 20:41:12.577268  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:29 old-k8s-version-992970 kubelet[664]: E1001 20:36:29.844345     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.579838  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:32 old-k8s-version-992970 kubelet[664]: E1001 20:36:32.373391     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.580336  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:36 old-k8s-version-992970 kubelet[664]: E1001 20:36:36.434745     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.580572  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:47 old-k8s-version-992970 kubelet[664]: E1001 20:36:47.345072     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.580931  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:49 old-k8s-version-992970 kubelet[664]: E1001 20:36:49.344533     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.581147  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:59 old-k8s-version-992970 kubelet[664]: E1001 20:36:59.345227     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.581760  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:01 old-k8s-version-992970 kubelet[664]: E1001 20:37:01.943400     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.582112  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:06 old-k8s-version-992970 kubelet[664]: E1001 20:37:06.435341     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.584773  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:13 old-k8s-version-992970 kubelet[664]: E1001 20:37:13.354154     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.585163  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:17 old-k8s-version-992970 kubelet[664]: E1001 20:37:17.345057     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.585546  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:28 old-k8s-version-992970 kubelet[664]: E1001 20:37:28.349670     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.585787  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:28 old-k8s-version-992970 kubelet[664]: E1001 20:37:28.364204     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.586466  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:43 old-k8s-version-992970 kubelet[664]: E1001 20:37:43.051579     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.586700  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:43 old-k8s-version-992970 kubelet[664]: E1001 20:37:43.345097     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.587094  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:46 old-k8s-version-992970 kubelet[664]: E1001 20:37:46.435452     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.587332  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:58 old-k8s-version-992970 kubelet[664]: E1001 20:37:58.344936     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.587728  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:00 old-k8s-version-992970 kubelet[664]: E1001 20:38:00.344778     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.588153  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:13 old-k8s-version-992970 kubelet[664]: E1001 20:38:13.344526     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.588395  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:13 old-k8s-version-992970 kubelet[664]: E1001 20:38:13.345426     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.588649  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:25 old-k8s-version-992970 kubelet[664]: E1001 20:38:25.345458     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.589054  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:26 old-k8s-version-992970 kubelet[664]: E1001 20:38:26.344683     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.591545  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:36 old-k8s-version-992970 kubelet[664]: E1001 20:38:36.353928     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:12.591913  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:38 old-k8s-version-992970 kubelet[664]: E1001 20:38:38.345052     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.592266  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:50 old-k8s-version-992970 kubelet[664]: E1001 20:38:50.345092     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.592488  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:51 old-k8s-version-992970 kubelet[664]: E1001 20:38:51.344896     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.592849  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:01 old-k8s-version-992970 kubelet[664]: E1001 20:39:01.344547     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.593066  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:02 old-k8s-version-992970 kubelet[664]: E1001 20:39:02.345834     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.593278  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:13 old-k8s-version-992970 kubelet[664]: E1001 20:39:13.344915     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.593891  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:15 old-k8s-version-992970 kubelet[664]: E1001 20:39:15.277864     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.594242  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:16 old-k8s-version-992970 kubelet[664]: E1001 20:39:16.434671     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.594496  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:26 old-k8s-version-992970 kubelet[664]: E1001 20:39:26.350298     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.594854  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:31 old-k8s-version-992970 kubelet[664]: E1001 20:39:31.344495     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.595324  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:38 old-k8s-version-992970 kubelet[664]: E1001 20:39:38.344941     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.595744  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:43 old-k8s-version-992970 kubelet[664]: E1001 20:39:43.344445     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.595957  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:49 old-k8s-version-992970 kubelet[664]: E1001 20:39:49.344870     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.599123  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:54 old-k8s-version-992970 kubelet[664]: E1001 20:39:54.344924     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.599369  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:00 old-k8s-version-992970 kubelet[664]: E1001 20:40:00.348991     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.599828  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:07 old-k8s-version-992970 kubelet[664]: E1001 20:40:07.344581     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.600062  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:11 old-k8s-version-992970 kubelet[664]: E1001 20:40:11.347785     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.600816  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:21 old-k8s-version-992970 kubelet[664]: E1001 20:40:21.344561     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.601059  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:26 old-k8s-version-992970 kubelet[664]: E1001 20:40:26.345878     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.601415  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:32 old-k8s-version-992970 kubelet[664]: E1001 20:40:32.344565     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.601626  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:37 old-k8s-version-992970 kubelet[664]: E1001 20:40:37.344837     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.601998  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:47 old-k8s-version-992970 kubelet[664]: E1001 20:40:47.345723     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.602224  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.602617  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:12.602837  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:12.603193  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	I1001 20:41:12.603225  945840 logs.go:123] Gathering logs for kube-scheduler [7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251] ...
	I1001 20:41:12.603254  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:41:12.658419  945840 logs.go:123] Gathering logs for kube-proxy [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c] ...
	I1001 20:41:12.658497  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c"
	I1001 20:41:12.710694  945840 logs.go:123] Gathering logs for kube-controller-manager [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6] ...
	I1001 20:41:12.710721  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6"
	I1001 20:41:12.809260  945840 logs.go:123] Gathering logs for kube-controller-manager [080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7] ...
	I1001 20:41:12.809297  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:41:12.935124  945840 logs.go:123] Gathering logs for kindnet [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c] ...
	I1001 20:41:12.935346  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c"
	I1001 20:41:13.008699  945840 logs.go:123] Gathering logs for kube-apiserver [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2] ...
	I1001 20:41:13.008767  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2"
	I1001 20:41:13.086831  945840 logs.go:123] Gathering logs for etcd [1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a] ...
	I1001 20:41:13.086867  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:41:13.151346  945840 logs.go:123] Gathering logs for coredns [79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763] ...
	I1001 20:41:13.151382  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
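
The gathering pass above just shells into the node and replays a handful of CRI and journald commands. A minimal way to reproduce it by hand, using only commands already shown in this log (the container ID is a placeholder for one of the IDs listed above):

    # list all containers (running and exited) for one component
    sudo crictl ps -a --quiet --name=kube-apiserver
    # dump the last 400 log lines of a specific container
    sudo /usr/bin/crictl logs --tail 400 <container-id>
    # kubelet and containerd logs come from journald, not the CRI
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u containerd -n 400
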
	I1001 20:41:12.803781  956726 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1001 20:41:12.804006  956726 start.go:159] libmachine.API.Create for "embed-certs-734252" (driver="docker")
	I1001 20:41:12.804038  956726 client.go:168] LocalClient.Create starting
	I1001 20:41:12.804112  956726 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem
	I1001 20:41:12.804150  956726 main.go:141] libmachine: Decoding PEM data...
	I1001 20:41:12.804165  956726 main.go:141] libmachine: Parsing certificate...
	I1001 20:41:12.804223  956726 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem
	I1001 20:41:12.804248  956726 main.go:141] libmachine: Decoding PEM data...
	I1001 20:41:12.804263  956726 main.go:141] libmachine: Parsing certificate...
	I1001 20:41:12.804999  956726 cli_runner.go:164] Run: docker network inspect embed-certs-734252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1001 20:41:12.830731  956726 cli_runner.go:211] docker network inspect embed-certs-734252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1001 20:41:12.830833  956726 network_create.go:284] running [docker network inspect embed-certs-734252] to gather additional debugging logs...
	I1001 20:41:12.830848  956726 cli_runner.go:164] Run: docker network inspect embed-certs-734252
	W1001 20:41:12.856400  956726 cli_runner.go:211] docker network inspect embed-certs-734252 returned with exit code 1
	I1001 20:41:12.856427  956726 network_create.go:287] error running [docker network inspect embed-certs-734252]: docker network inspect embed-certs-734252: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-734252 not found
	I1001 20:41:12.856440  956726 network_create.go:289] output of [docker network inspect embed-certs-734252]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-734252 not found
	
	** /stderr **
	I1001 20:41:12.856612  956726 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 20:41:12.871508  956726 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2a68ee21f9af IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a0:14:b4:5c} reservation:<nil>}
	I1001 20:41:12.872164  956726 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-6dc0504630e6 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:3b:54:56:41} reservation:<nil>}
	I1001 20:41:12.872943  956726 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d8f2f5c79a77 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:d2:57:30:3e} reservation:<nil>}
	I1001 20:41:12.873494  956726 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-805bafacc58f IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:35:b3:03:43} reservation:<nil>}
	I1001 20:41:12.874192  956726 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018a3090}
	I1001 20:41:12.874228  956726 network_create.go:124] attempt to create docker network embed-certs-734252 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1001 20:41:12.874303  956726 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-734252 embed-certs-734252
	I1001 20:41:12.967934  956726 network_create.go:108] docker network embed-certs-734252 192.168.85.0/24 created
	I1001 20:41:12.967968  956726 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-734252" container
	I1001 20:41:12.968054  956726 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1001 20:41:12.984645  956726 cli_runner.go:164] Run: docker volume create embed-certs-734252 --label name.minikube.sigs.k8s.io=embed-certs-734252 --label created_by.minikube.sigs.k8s.io=true
	I1001 20:41:13.003271  956726 oci.go:103] Successfully created a docker volume embed-certs-734252
	I1001 20:41:13.003365  956726 cli_runner.go:164] Run: docker run --rm --name embed-certs-734252-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-734252 --entrypoint /usr/bin/test -v embed-certs-734252:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -d /var/lib
	I1001 20:41:13.764884  956726 oci.go:107] Successfully prepared a docker volume embed-certs-734252
	I1001 20:41:13.764933  956726 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 20:41:13.764954  956726 kic.go:194] Starting extracting preloaded images to volume ...
	I1001 20:41:13.765028  956726 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-734252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir
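
At this point the kic driver has created the profile's bridge network and named volume and is unpacking the preloaded image tarball into that volume. The intermediate state can be checked with plain docker commands; a small sketch, assuming only the profile name shown in the log (the --format templates are standard docker inspect templates, not minikube-specific):

    # subnet of the network minikube just created (expected: 192.168.85.0/24)
    docker network inspect embed-certs-734252 --format '{{(index .IPAM.Config 0).Subnet}}'
    # the named volume that will back /var inside the node container
    docker volume inspect embed-certs-734252 --format '{{.Name}} -> {{.Mountpoint}}'
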
	I1001 20:41:13.200242  945840 logs.go:123] Gathering logs for storage-provisioner [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10] ...
	I1001 20:41:13.200279  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10"
	I1001 20:41:13.247808  945840 logs.go:123] Gathering logs for storage-provisioner [24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3] ...
	I1001 20:41:13.247837  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3"
	I1001 20:41:13.305453  945840 logs.go:123] Gathering logs for kubernetes-dashboard [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3] ...
	I1001 20:41:13.305482  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3"
	I1001 20:41:13.431378  945840 logs.go:123] Gathering logs for containerd ...
	I1001 20:41:13.431414  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1001 20:41:13.507547  945840 logs.go:123] Gathering logs for dmesg ...
	I1001 20:41:13.507624  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:41:13.528688  945840 logs.go:123] Gathering logs for etcd [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15] ...
	I1001 20:41:13.528715  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15"
	I1001 20:41:13.643064  945840 logs.go:123] Gathering logs for coredns [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648] ...
	I1001 20:41:13.643141  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648"
	I1001 20:41:13.718811  945840 logs.go:123] Gathering logs for kube-proxy [d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b] ...
	I1001 20:41:13.718840  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:41:13.790938  945840 logs.go:123] Gathering logs for kindnet [f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf] ...
	I1001 20:41:13.790981  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:41:13.864634  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:13.864656  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 20:41:13.864713  945840 out.go:270] X Problems detected in kubelet:
	W1001 20:41:13.864723  945840 out.go:270]   Oct 01 20:40:47 old-k8s-version-992970 kubelet[664]: E1001 20:40:47.345723     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:13.864731  945840 out.go:270]   Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:13.864739  945840 out.go:270]   Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:13.864745  945840 out.go:270]   Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:13.864755  945840 out.go:270]   Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	I1001 20:41:13.864760  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:13.864766  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:41:18.280245  956726 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-734252:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 -I lz4 -xf /preloaded.tar -C /extractDir: (4.515163905s)
	I1001 20:41:18.280273  956726 kic.go:203] duration metric: took 4.51531495s to extract preloaded images to volume ...
	W1001 20:41:18.280404  956726 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1001 20:41:18.280649  956726 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1001 20:41:18.338999  956726 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-734252 --name embed-certs-734252 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-734252 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-734252 --network embed-certs-734252 --ip 192.168.85.2 --volume embed-certs-734252:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122
	I1001 20:41:18.675645  956726 cli_runner.go:164] Run: docker container inspect embed-certs-734252 --format={{.State.Running}}
	I1001 20:41:18.697365  956726 cli_runner.go:164] Run: docker container inspect embed-certs-734252 --format={{.State.Status}}
	I1001 20:41:18.722210  956726 cli_runner.go:164] Run: docker exec embed-certs-734252 stat /var/lib/dpkg/alternatives/iptables
	I1001 20:41:18.800090  956726 oci.go:144] the created container "embed-certs-734252" has a running status.
	I1001 20:41:18.800127  956726 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19736-735883/.minikube/machines/embed-certs-734252/id_rsa...
	I1001 20:41:19.477035  956726 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19736-735883/.minikube/machines/embed-certs-734252/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1001 20:41:19.508557  956726 cli_runner.go:164] Run: docker container inspect embed-certs-734252 --format={{.State.Status}}
	I1001 20:41:19.549449  956726 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1001 20:41:19.549472  956726 kic_runner.go:114] Args: [docker exec --privileged embed-certs-734252 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1001 20:41:19.638289  956726 cli_runner.go:164] Run: docker container inspect embed-certs-734252 --format={{.State.Status}}
	I1001 20:41:19.664278  956726 machine.go:93] provisionDockerMachine start ...
	I1001 20:41:19.664388  956726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-734252
	I1001 20:41:19.683305  956726 main.go:141] libmachine: Using SSH client type: native
	I1001 20:41:19.683644  956726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1001 20:41:19.683663  956726 main.go:141] libmachine: About to run SSH command:
	hostname
	I1001 20:41:19.836298  956726 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-734252
	
	I1001 20:41:19.836327  956726 ubuntu.go:169] provisioning hostname "embed-certs-734252"
	I1001 20:41:19.836395  956726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-734252
	I1001 20:41:19.855272  956726 main.go:141] libmachine: Using SSH client type: native
	I1001 20:41:19.855561  956726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1001 20:41:19.855582  956726 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-734252 && echo "embed-certs-734252" | sudo tee /etc/hostname
	I1001 20:41:20.012591  956726 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-734252
	
	I1001 20:41:20.012733  956726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-734252
	I1001 20:41:20.035004  956726 main.go:141] libmachine: Using SSH client type: native
	I1001 20:41:20.035257  956726 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1001 20:41:20.035277  956726 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-734252' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-734252/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-734252' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1001 20:41:20.176730  956726 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1001 20:41:20.176759  956726 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19736-735883/.minikube CaCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19736-735883/.minikube}
	I1001 20:41:20.176798  956726 ubuntu.go:177] setting up certificates
	I1001 20:41:20.176810  956726 provision.go:84] configureAuth start
	I1001 20:41:20.176886  956726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-734252
	I1001 20:41:20.197607  956726 provision.go:143] copyHostCerts
	I1001 20:41:20.197673  956726 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-735883/.minikube/ca.pem, removing ...
	I1001 20:41:20.197687  956726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-735883/.minikube/ca.pem
	I1001 20:41:20.197756  956726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/ca.pem (1078 bytes)
	I1001 20:41:20.197845  956726 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-735883/.minikube/cert.pem, removing ...
	I1001 20:41:20.197855  956726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-735883/.minikube/cert.pem
	I1001 20:41:20.197881  956726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/cert.pem (1123 bytes)
	I1001 20:41:20.197936  956726 exec_runner.go:144] found /home/jenkins/minikube-integration/19736-735883/.minikube/key.pem, removing ...
	I1001 20:41:20.197945  956726 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19736-735883/.minikube/key.pem
	I1001 20:41:20.197970  956726 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19736-735883/.minikube/key.pem (1679 bytes)
	I1001 20:41:20.198021  956726 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem org=jenkins.embed-certs-734252 san=[127.0.0.1 192.168.85.2 embed-certs-734252 localhost minikube]
	I1001 20:41:20.548624  956726 provision.go:177] copyRemoteCerts
	I1001 20:41:20.548740  956726 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1001 20:41:20.548805  956726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-734252
	I1001 20:41:20.564897  956726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/embed-certs-734252/id_rsa Username:docker}
	I1001 20:41:20.673395  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1001 20:41:20.700306  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1001 20:41:20.725468  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1001 20:41:20.749868  956726 provision.go:87] duration metric: took 573.028588ms to configureAuth
	I1001 20:41:20.749895  956726 ubuntu.go:193] setting minikube options for container-runtime
	I1001 20:41:20.750078  956726 config.go:182] Loaded profile config "embed-certs-734252": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 20:41:20.750092  956726 machine.go:96] duration metric: took 1.085795457s to provisionDockerMachine
	I1001 20:41:20.750099  956726 client.go:171] duration metric: took 7.946050119s to LocalClient.Create
	I1001 20:41:20.750113  956726 start.go:167] duration metric: took 7.94610734s to libmachine.API.Create "embed-certs-734252"
	I1001 20:41:20.750124  956726 start.go:293] postStartSetup for "embed-certs-734252" (driver="docker")
	I1001 20:41:20.750134  956726 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1001 20:41:20.750201  956726 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1001 20:41:20.750250  956726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-734252
	I1001 20:41:20.765942  956726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/embed-certs-734252/id_rsa Username:docker}
	I1001 20:41:20.861560  956726 ssh_runner.go:195] Run: cat /etc/os-release
	I1001 20:41:20.864600  956726 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1001 20:41:20.864639  956726 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1001 20:41:20.864650  956726 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1001 20:41:20.864658  956726 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1001 20:41:20.864668  956726 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-735883/.minikube/addons for local assets ...
	I1001 20:41:20.864727  956726 filesync.go:126] Scanning /home/jenkins/minikube-integration/19736-735883/.minikube/files for local assets ...
	I1001 20:41:20.864813  956726 filesync.go:149] local asset: /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/ssl/certs/7412642.pem -> 7412642.pem in /etc/ssl/certs
	I1001 20:41:20.864931  956726 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1001 20:41:20.873566  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/ssl/certs/7412642.pem --> /etc/ssl/certs/7412642.pem (1708 bytes)
	I1001 20:41:20.897846  956726 start.go:296] duration metric: took 147.705304ms for postStartSetup
	I1001 20:41:20.898224  956726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-734252
	I1001 20:41:20.914848  956726 profile.go:143] Saving config to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/config.json ...
	I1001 20:41:20.915122  956726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 20:41:20.915164  956726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-734252
	I1001 20:41:20.931391  956726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/embed-certs-734252/id_rsa Username:docker}
	I1001 20:41:21.025322  956726 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1001 20:41:21.029581  956726 start.go:128] duration metric: took 8.227968926s to createHost
	I1001 20:41:21.029605  956726 start.go:83] releasing machines lock for "embed-certs-734252", held for 8.228108688s
	I1001 20:41:21.029672  956726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-734252
	I1001 20:41:21.051457  956726 ssh_runner.go:195] Run: cat /version.json
	I1001 20:41:21.051470  956726 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1001 20:41:21.051509  956726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-734252
	I1001 20:41:21.051531  956726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-734252
	I1001 20:41:21.073715  956726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/embed-certs-734252/id_rsa Username:docker}
	I1001 20:41:21.085623  956726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/embed-certs-734252/id_rsa Username:docker}
	I1001 20:41:21.299087  956726 ssh_runner.go:195] Run: systemctl --version
	I1001 20:41:21.303493  956726 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1001 20:41:21.307829  956726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1001 20:41:21.333353  956726 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1001 20:41:21.333436  956726 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1001 20:41:21.362892  956726 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1001 20:41:21.362913  956726 start.go:495] detecting cgroup driver to use...
	I1001 20:41:21.362946  956726 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1001 20:41:21.362996  956726 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1001 20:41:21.376047  956726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1001 20:41:21.387758  956726 docker.go:217] disabling cri-docker service (if available) ...
	I1001 20:41:21.387875  956726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1001 20:41:21.402287  956726 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1001 20:41:21.417271  956726 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1001 20:41:21.506778  956726 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1001 20:41:21.612548  956726 docker.go:233] disabling docker service ...
	I1001 20:41:21.612665  956726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1001 20:41:21.640940  956726 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1001 20:41:21.654142  956726 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1001 20:41:21.754550  956726 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1001 20:41:21.848737  956726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1001 20:41:21.866969  956726 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1001 20:41:21.886084  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1001 20:41:21.897099  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1001 20:41:21.907177  956726 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1001 20:41:21.907308  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1001 20:41:21.917843  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 20:41:21.928244  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1001 20:41:21.938827  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1001 20:41:21.950323  956726 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1001 20:41:21.961151  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1001 20:41:21.971663  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1001 20:41:21.983446  956726 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1001 20:41:21.995037  956726 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1001 20:41:22.004852  956726 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1001 20:41:22.014114  956726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:41:22.107123  956726 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1001 20:41:22.241631  956726 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1001 20:41:22.241705  956726 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1001 20:41:22.245901  956726 start.go:563] Will wait 60s for crictl version
	I1001 20:41:22.246009  956726 ssh_runner.go:195] Run: which crictl
	I1001 20:41:22.249931  956726 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1001 20:41:22.289495  956726 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
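The sed invocations above rewrite /etc/containerd/config.toml (SystemdCgroup = false for the cgroupfs driver, the pause:3.10 sandbox image, and the CNI conf_dir) before containerd is restarted and probed with crictl. A minimal, hedged way to confirm the same state on the node is sketched below; the file path and keys come from the log, the grep pattern and explicit --runtime-endpoint flag are illustrative:
	# Illustrative only: confirm the config.toml edits applied above and that
	# containerd answers over CRI after the restart.
	sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version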
	I1001 20:41:22.289594  956726 ssh_runner.go:195] Run: containerd --version
	I1001 20:41:22.311729  956726 ssh_runner.go:195] Run: containerd --version
	I1001 20:41:22.338586  956726 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1001 20:41:22.340584  956726 cli_runner.go:164] Run: docker network inspect embed-certs-734252 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1001 20:41:22.373565  956726 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1001 20:41:22.377197  956726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:41:22.388497  956726 kubeadm.go:883] updating cluster {Name:embed-certs-734252 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-734252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1001 20:41:22.388618  956726 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 20:41:22.388679  956726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:41:22.426951  956726 containerd.go:627] all images are preloaded for containerd runtime.
	I1001 20:41:22.426975  956726 containerd.go:534] Images already preloaded, skipping extraction
	I1001 20:41:22.427046  956726 ssh_runner.go:195] Run: sudo crictl images --output json
	I1001 20:41:22.463446  956726 containerd.go:627] all images are preloaded for containerd runtime.
	I1001 20:41:22.463468  956726 cache_images.go:84] Images are preloaded, skipping loading
	I1001 20:41:22.463476  956726 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I1001 20:41:22.463565  956726 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-734252 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-734252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1001 20:41:22.463635  956726 ssh_runner.go:195] Run: sudo crictl info
	I1001 20:41:22.502565  956726 cni.go:84] Creating CNI manager for ""
	I1001 20:41:22.502591  956726 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 20:41:22.502602  956726 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1001 20:41:22.502626  956726 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-734252 NodeName:embed-certs-734252 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1001 20:41:22.502753  956726 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-734252"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
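The kubeadm manifest above is what later lands in /var/tmp/minikube/kubeadm.yaml and is passed to kubeadm init further down in this log. As a hedged aside, kubeadm releases from v1.26 onward can lint such a file before it is used; the command below is illustrative and assumes the file path shown later in the log:
	# Illustrative only: sanity-check the generated manifest before init runs.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml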
	
	I1001 20:41:22.502831  956726 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1001 20:41:22.511552  956726 binaries.go:44] Found k8s binaries, skipping transfer
	I1001 20:41:22.511674  956726 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1001 20:41:22.520512  956726 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1001 20:41:22.540576  956726 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1001 20:41:22.561557  956726 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I1001 20:41:22.580972  956726 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1001 20:41:22.584551  956726 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1001 20:41:22.595478  956726 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1001 20:41:22.689897  956726 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1001 20:41:22.704794  956726 certs.go:68] Setting up /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252 for IP: 192.168.85.2
	I1001 20:41:22.704873  956726 certs.go:194] generating shared ca certs ...
	I1001 20:41:22.704911  956726 certs.go:226] acquiring lock for ca certs: {Name:mk132cf96fd4e71a64bde5e1335b23d155d99f69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:22.705136  956726 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19736-735883/.minikube/ca.key
	I1001 20:41:22.705215  956726 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.key
	I1001 20:41:22.705254  956726 certs.go:256] generating profile certs ...
	I1001 20:41:22.705348  956726 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/client.key
	I1001 20:41:22.705382  956726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/client.crt with IP's: []
	I1001 20:41:23.126646  956726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/client.crt ...
	I1001 20:41:23.126678  956726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/client.crt: {Name:mk8597bc7235c7a6f8896133155d3cd7a01450a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:23.126911  956726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/client.key ...
	I1001 20:41:23.126927  956726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/client.key: {Name:mk38649c41c511daa802609ccb708f9903f15ed5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:23.127645  956726 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.key.13f6bccc
	I1001 20:41:23.127667  956726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.crt.13f6bccc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1001 20:41:23.580963  956726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.crt.13f6bccc ...
	I1001 20:41:23.581000  956726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.crt.13f6bccc: {Name:mkb62bf05e95589743d99537906df99bbc728718 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:23.581189  956726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.key.13f6bccc ...
	I1001 20:41:23.581204  956726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.key.13f6bccc: {Name:mkc90a1aa82dcc5a0e1deb1723826999b84f177b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:23.581822  956726 certs.go:381] copying /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.crt.13f6bccc -> /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.crt
	I1001 20:41:23.581909  956726 certs.go:385] copying /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.key.13f6bccc -> /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.key
	I1001 20:41:23.581969  956726 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/proxy-client.key
	I1001 20:41:23.581988  956726 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/proxy-client.crt with IP's: []
	I1001 20:41:24.369973  956726 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/proxy-client.crt ...
	I1001 20:41:24.369997  956726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/proxy-client.crt: {Name:mk45dc3219840cacc570c06588d18ab7a4206a23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:24.370159  956726 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/proxy-client.key ...
	I1001 20:41:24.370172  956726 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/proxy-client.key: {Name:mkf7941e9e5fe8e7d23ce83e9c8f39caf9ec9851 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1001 20:41:24.372638  956726 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/741264.pem (1338 bytes)
	W1001 20:41:24.372695  956726 certs.go:480] ignoring /home/jenkins/minikube-integration/19736-735883/.minikube/certs/741264_empty.pem, impossibly tiny 0 bytes
	I1001 20:41:24.372708  956726 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca-key.pem (1675 bytes)
	I1001 20:41:24.372732  956726 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/ca.pem (1078 bytes)
	I1001 20:41:24.372758  956726 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/cert.pem (1123 bytes)
	I1001 20:41:24.372792  956726 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/certs/key.pem (1679 bytes)
	I1001 20:41:24.372841  956726 certs.go:484] found cert: /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/ssl/certs/7412642.pem (1708 bytes)
	I1001 20:41:24.373463  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1001 20:41:24.426907  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1001 20:41:24.466830  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1001 20:41:24.491525  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1001 20:41:24.516989  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1001 20:41:24.541289  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1001 20:41:24.565486  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1001 20:41:24.602792  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/embed-certs-734252/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1001 20:41:24.662725  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/ssl/certs/7412642.pem --> /usr/share/ca-certificates/7412642.pem (1708 bytes)
	I1001 20:41:24.690576  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1001 20:41:24.720960  956726 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19736-735883/.minikube/certs/741264.pem --> /usr/share/ca-certificates/741264.pem (1338 bytes)
	I1001 20:41:24.757692  956726 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1001 20:41:24.777997  956726 ssh_runner.go:195] Run: openssl version
	I1001 20:41:24.784241  956726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/741264.pem && ln -fs /usr/share/ca-certificates/741264.pem /etc/ssl/certs/741264.pem"
	I1001 20:41:24.796335  956726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/741264.pem
	I1001 20:41:24.800003  956726 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  1 19:57 /usr/share/ca-certificates/741264.pem
	I1001 20:41:24.800062  956726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/741264.pem
	I1001 20:41:24.809942  956726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/741264.pem /etc/ssl/certs/51391683.0"
	I1001 20:41:24.835605  956726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7412642.pem && ln -fs /usr/share/ca-certificates/7412642.pem /etc/ssl/certs/7412642.pem"
	I1001 20:41:24.852925  956726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7412642.pem
	I1001 20:41:24.857191  956726 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  1 19:57 /usr/share/ca-certificates/7412642.pem
	I1001 20:41:24.857264  956726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7412642.pem
	I1001 20:41:24.865153  956726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7412642.pem /etc/ssl/certs/3ec20f2e.0"
	I1001 20:41:24.875381  956726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1001 20:41:24.885858  956726 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:41:24.890484  956726 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  1 19:47 /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:41:24.890548  956726 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1001 20:41:24.899377  956726 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
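The openssl x509 -hash calls and the ln -fs ... /etc/ssl/certs/<hash>.0 commands above follow OpenSSL's subject-hash lookup convention for CA certificates. A minimal sketch of that step for the minikube CA, mirroring the commands in the log (file names from the log; the HASH variable is illustrative):
	# Illustrative only: derive the subject-name hash OpenSSL uses for lookups,
	# then create the <hash>.0 symlink, as the commands in the log above do.
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"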
	I1001 20:41:24.910987  956726 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1001 20:41:24.915965  956726 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1001 20:41:24.916051  956726 kubeadm.go:392] StartCluster: {Name:embed-certs-734252 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-734252 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIS
erverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:41:24.916153  956726 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1001 20:41:24.916266  956726 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1001 20:41:24.965168  956726 cri.go:89] found id: ""
	I1001 20:41:24.965269  956726 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1001 20:41:24.976291  956726 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1001 20:41:24.985796  956726 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1001 20:41:24.985903  956726 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1001 20:41:25.000014  956726 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1001 20:41:25.000036  956726 kubeadm.go:157] found existing configuration files:
	
	I1001 20:41:25.000127  956726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1001 20:41:25.011808  956726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1001 20:41:25.011909  956726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1001 20:41:25.024373  956726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1001 20:41:25.035451  956726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1001 20:41:25.035572  956726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1001 20:41:25.044617  956726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1001 20:41:25.058922  956726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1001 20:41:25.058994  956726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1001 20:41:25.068883  956726 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1001 20:41:25.083482  956726 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1001 20:41:25.083551  956726 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1001 20:41:25.093651  956726 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1001 20:41:25.187629  956726 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1001 20:41:25.187916  956726 kubeadm.go:310] [preflight] Running pre-flight checks
	I1001 20:41:25.222565  956726 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1001 20:41:25.222666  956726 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1001 20:41:25.222725  956726 kubeadm.go:310] OS: Linux
	I1001 20:41:25.222805  956726 kubeadm.go:310] CGROUPS_CPU: enabled
	I1001 20:41:25.222885  956726 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1001 20:41:25.222963  956726 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1001 20:41:25.223040  956726 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1001 20:41:25.223117  956726 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1001 20:41:25.223193  956726 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1001 20:41:25.223267  956726 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1001 20:41:25.223344  956726 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1001 20:41:25.223422  956726 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1001 20:41:25.327258  956726 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1001 20:41:25.327439  956726 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1001 20:41:25.327562  956726 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1001 20:41:25.336567  956726 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1001 20:41:25.339757  956726 out.go:235]   - Generating certificates and keys ...
	I1001 20:41:25.339944  956726 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1001 20:41:25.340131  956726 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1001 20:41:25.901931  956726 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1001 20:41:26.250281  956726 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1001 20:41:26.916481  956726 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1001 20:41:23.866028  945840 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:41:23.881517  945840 api_server.go:72] duration metric: took 5m57.675060864s to wait for apiserver process to appear ...
	I1001 20:41:23.881538  945840 api_server.go:88] waiting for apiserver healthz status ...
	I1001 20:41:23.881574  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1001 20:41:23.881630  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1001 20:41:23.934034  945840 cri.go:89] found id: "52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2"
	I1001 20:41:23.934053  945840 cri.go:89] found id: "5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:41:23.934058  945840 cri.go:89] found id: ""
	I1001 20:41:23.934065  945840 logs.go:276] 2 containers: [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2 5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff]
	I1001 20:41:23.934131  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:23.938366  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:23.942113  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1001 20:41:23.942178  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1001 20:41:24.002772  945840 cri.go:89] found id: "2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15"
	I1001 20:41:24.002793  945840 cri.go:89] found id: "1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:41:24.002799  945840 cri.go:89] found id: ""
	I1001 20:41:24.002806  945840 logs.go:276] 2 containers: [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15 1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a]
	I1001 20:41:24.002864  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.007635  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.011475  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1001 20:41:24.011546  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1001 20:41:24.062881  945840 cri.go:89] found id: "9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648"
	I1001 20:41:24.062961  945840 cri.go:89] found id: "79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
	I1001 20:41:24.062983  945840 cri.go:89] found id: ""
	I1001 20:41:24.063006  945840 logs.go:276] 2 containers: [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648 79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763]
	I1001 20:41:24.063110  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.067221  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.070908  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1001 20:41:24.071040  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1001 20:41:24.122643  945840 cri.go:89] found id: "4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6"
	I1001 20:41:24.122664  945840 cri.go:89] found id: "7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:41:24.122670  945840 cri.go:89] found id: ""
	I1001 20:41:24.122678  945840 logs.go:276] 2 containers: [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6 7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251]
	I1001 20:41:24.122739  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.126795  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.130802  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1001 20:41:24.130927  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1001 20:41:24.181796  945840 cri.go:89] found id: "0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c"
	I1001 20:41:24.181870  945840 cri.go:89] found id: "d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:41:24.181891  945840 cri.go:89] found id: ""
	I1001 20:41:24.181913  945840 logs.go:276] 2 containers: [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b]
	I1001 20:41:24.182000  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.186431  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.190291  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1001 20:41:24.190413  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1001 20:41:24.239513  945840 cri.go:89] found id: "328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6"
	I1001 20:41:24.239585  945840 cri.go:89] found id: "080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:41:24.239604  945840 cri.go:89] found id: ""
	I1001 20:41:24.239626  945840 logs.go:276] 2 containers: [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6 080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7]
	I1001 20:41:24.239711  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.244080  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.247753  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1001 20:41:24.247869  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1001 20:41:24.299189  945840 cri.go:89] found id: "c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c"
	I1001 20:41:24.299257  945840 cri.go:89] found id: "f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:41:24.299279  945840 cri.go:89] found id: ""
	I1001 20:41:24.299300  945840 logs.go:276] 2 containers: [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf]
	I1001 20:41:24.299384  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.303343  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.307243  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1001 20:41:24.307309  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1001 20:41:24.368940  945840 cri.go:89] found id: "25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10"
	I1001 20:41:24.368959  945840 cri.go:89] found id: "24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3"
	I1001 20:41:24.368964  945840 cri.go:89] found id: ""
	I1001 20:41:24.368971  945840 logs.go:276] 2 containers: [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10 24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3]
	I1001 20:41:24.369025  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.373383  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.383396  945840 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1001 20:41:24.383465  945840 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1001 20:41:24.460516  945840 cri.go:89] found id: "9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3"
	I1001 20:41:24.460535  945840 cri.go:89] found id: ""
	I1001 20:41:24.460543  945840 logs.go:276] 1 containers: [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3]
	I1001 20:41:24.460602  945840 ssh_runner.go:195] Run: which crictl
	I1001 20:41:24.465930  945840 logs.go:123] Gathering logs for coredns [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648] ...
	I1001 20:41:24.466006  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648"
	I1001 20:41:24.533082  945840 logs.go:123] Gathering logs for coredns [79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763] ...
	I1001 20:41:24.533175  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763"
	I1001 20:41:24.608751  945840 logs.go:123] Gathering logs for containerd ...
	I1001 20:41:24.608781  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1001 20:41:24.686077  945840 logs.go:123] Gathering logs for container status ...
	I1001 20:41:24.686117  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1001 20:41:24.749063  945840 logs.go:123] Gathering logs for kubernetes-dashboard [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3] ...
	I1001 20:41:24.749093  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3"
	I1001 20:41:24.813183  945840 logs.go:123] Gathering logs for kubelet ...
	I1001 20:41:24.813211  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1001 20:41:24.874850  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.979627     664 reflector.go:138] object-"kube-system"/"kube-proxy-token-bktfz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-bktfz" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.875094  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.979931     664 reflector.go:138] object-"kube-system"/"storage-provisioner-token-hcc5p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-hcc5p" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.875690  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.980146     664 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.875929  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.983725     664 reflector.go:138] object-"kube-system"/"coredns-token-q5kkb": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-q5kkb" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.876172  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.983973     664 reflector.go:138] object-"kube-system"/"metrics-server-token-qwlv6": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-qwlv6" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.876545  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984045     664 reflector.go:138] object-"default"/"default-token-h86wg": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-h86wg" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.876777  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984119     664 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.877019  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:45 old-k8s-version-992970 kubelet[664]: E1001 20:35:45.984187     664 reflector.go:138] object-"kube-system"/"kindnet-token-pv9f8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-pv9f8" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.885469  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:48 old-k8s-version-992970 kubelet[664]: E1001 20:35:48.311000     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.886649  945840 logs.go:138] Found kubelet problem: Oct 01 20:35:48 old-k8s-version-992970 kubelet[664]: E1001 20:35:48.692405     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.889553  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:02 old-k8s-version-992970 kubelet[664]: E1001 20:36:02.354833     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.889958  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:03 old-k8s-version-992970 kubelet[664]: E1001 20:36:03.679908     664 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-npz8p": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-npz8p" is forbidden: User "system:node:old-k8s-version-992970" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-992970' and this object
	W1001 20:41:24.893883  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:11 old-k8s-version-992970 kubelet[664]: E1001 20:36:11.784869     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.894223  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:12 old-k8s-version-992970 kubelet[664]: E1001 20:36:12.783230     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.894958  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:16 old-k8s-version-992970 kubelet[664]: E1001 20:36:16.435246     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.895145  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:17 old-k8s-version-992970 kubelet[664]: E1001 20:36:17.345729     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.895578  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:19 old-k8s-version-992970 kubelet[664]: E1001 20:36:19.814778     664 pod_workers.go:191] Error syncing pod 71d7d681-3057-4e08-8ce0-dd68e87dfd26 ("storage-provisioner_kube-system(71d7d681-3057-4e08-8ce0-dd68e87dfd26)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(71d7d681-3057-4e08-8ce0-dd68e87dfd26)"
	W1001 20:41:24.896576  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:29 old-k8s-version-992970 kubelet[664]: E1001 20:36:29.844345     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.899079  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:32 old-k8s-version-992970 kubelet[664]: E1001 20:36:32.373391     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.899584  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:36 old-k8s-version-992970 kubelet[664]: E1001 20:36:36.434745     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.899808  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:47 old-k8s-version-992970 kubelet[664]: E1001 20:36:47.345072     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.900159  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:49 old-k8s-version-992970 kubelet[664]: E1001 20:36:49.344533     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.900364  945840 logs.go:138] Found kubelet problem: Oct 01 20:36:59 old-k8s-version-992970 kubelet[664]: E1001 20:36:59.345227     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.901014  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:01 old-k8s-version-992970 kubelet[664]: E1001 20:37:01.943400     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.901364  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:06 old-k8s-version-992970 kubelet[664]: E1001 20:37:06.435341     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.903814  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:13 old-k8s-version-992970 kubelet[664]: E1001 20:37:13.354154     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.904170  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:17 old-k8s-version-992970 kubelet[664]: E1001 20:37:17.345057     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.904544  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:28 old-k8s-version-992970 kubelet[664]: E1001 20:37:28.349670     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.904766  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:28 old-k8s-version-992970 kubelet[664]: E1001 20:37:28.364204     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.905377  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:43 old-k8s-version-992970 kubelet[664]: E1001 20:37:43.051579     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.905588  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:43 old-k8s-version-992970 kubelet[664]: E1001 20:37:43.345097     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.905990  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:46 old-k8s-version-992970 kubelet[664]: E1001 20:37:46.435452     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.906195  945840 logs.go:138] Found kubelet problem: Oct 01 20:37:58 old-k8s-version-992970 kubelet[664]: E1001 20:37:58.344936     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.906542  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:00 old-k8s-version-992970 kubelet[664]: E1001 20:38:00.344778     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.906893  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:13 old-k8s-version-992970 kubelet[664]: E1001 20:38:13.344526     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.907103  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:13 old-k8s-version-992970 kubelet[664]: E1001 20:38:13.345426     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.908033  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:25 old-k8s-version-992970 kubelet[664]: E1001 20:38:25.345458     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.908408  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:26 old-k8s-version-992970 kubelet[664]: E1001 20:38:26.344683     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.912918  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:36 old-k8s-version-992970 kubelet[664]: E1001 20:38:36.353928     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W1001 20:41:24.913282  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:38 old-k8s-version-992970 kubelet[664]: E1001 20:38:38.345052     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.913655  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:50 old-k8s-version-992970 kubelet[664]: E1001 20:38:50.345092     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.913842  945840 logs.go:138] Found kubelet problem: Oct 01 20:38:51 old-k8s-version-992970 kubelet[664]: E1001 20:38:51.344896     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.914173  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:01 old-k8s-version-992970 kubelet[664]: E1001 20:39:01.344547     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.914353  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:02 old-k8s-version-992970 kubelet[664]: E1001 20:39:02.345834     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.914534  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:13 old-k8s-version-992970 kubelet[664]: E1001 20:39:13.344915     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.915125  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:15 old-k8s-version-992970 kubelet[664]: E1001 20:39:15.277864     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.915447  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:16 old-k8s-version-992970 kubelet[664]: E1001 20:39:16.434671     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.915665  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:26 old-k8s-version-992970 kubelet[664]: E1001 20:39:26.350298     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.916033  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:31 old-k8s-version-992970 kubelet[664]: E1001 20:39:31.344495     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.916415  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:38 old-k8s-version-992970 kubelet[664]: E1001 20:39:38.344941     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.916812  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:43 old-k8s-version-992970 kubelet[664]: E1001 20:39:43.344445     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.917030  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:49 old-k8s-version-992970 kubelet[664]: E1001 20:39:49.344870     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.917374  945840 logs.go:138] Found kubelet problem: Oct 01 20:39:54 old-k8s-version-992970 kubelet[664]: E1001 20:39:54.344924     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.917578  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:00 old-k8s-version-992970 kubelet[664]: E1001 20:40:00.348991     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.917925  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:07 old-k8s-version-992970 kubelet[664]: E1001 20:40:07.344581     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.918129  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:11 old-k8s-version-992970 kubelet[664]: E1001 20:40:11.347785     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.918475  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:21 old-k8s-version-992970 kubelet[664]: E1001 20:40:21.344561     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.918682  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:26 old-k8s-version-992970 kubelet[664]: E1001 20:40:26.345878     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.919028  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:32 old-k8s-version-992970 kubelet[664]: E1001 20:40:32.344565     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.919228  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:37 old-k8s-version-992970 kubelet[664]: E1001 20:40:37.344837     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.919578  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:47 old-k8s-version-992970 kubelet[664]: E1001 20:40:47.345723     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.919778  945840 logs.go:138] Found kubelet problem: Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.920120  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.920347  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:24.920738  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:24.920948  945840 logs.go:138] Found kubelet problem: Oct 01 20:41:14 old-k8s-version-992970 kubelet[664]: E1001 20:41:14.350030     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1001 20:41:24.920978  945840 logs.go:123] Gathering logs for dmesg ...
	I1001 20:41:24.921008  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1001 20:41:24.943590  945840 logs.go:123] Gathering logs for etcd [1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a] ...
	I1001 20:41:24.943618  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a"
	I1001 20:41:25.014814  945840 logs.go:123] Gathering logs for kube-scheduler [7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251] ...
	I1001 20:41:25.014982  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251"
	I1001 20:41:25.079403  945840 logs.go:123] Gathering logs for kube-controller-manager [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6] ...
	I1001 20:41:25.079592  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6"
	I1001 20:41:25.156269  945840 logs.go:123] Gathering logs for kube-controller-manager [080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7] ...
	I1001 20:41:25.156352  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7"
	I1001 20:41:25.245466  945840 logs.go:123] Gathering logs for kube-apiserver [5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff] ...
	I1001 20:41:25.245543  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff"
	I1001 20:41:25.311121  945840 logs.go:123] Gathering logs for etcd [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15] ...
	I1001 20:41:25.311160  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15"
	I1001 20:41:25.374733  945840 logs.go:123] Gathering logs for kube-scheduler [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6] ...
	I1001 20:41:25.374871  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6"
	I1001 20:41:25.427545  945840 logs.go:123] Gathering logs for kindnet [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c] ...
	I1001 20:41:25.427570  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c"
	I1001 20:41:25.498817  945840 logs.go:123] Gathering logs for kindnet [f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf] ...
	I1001 20:41:25.498895  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf"
	I1001 20:41:25.566874  945840 logs.go:123] Gathering logs for storage-provisioner [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10] ...
	I1001 20:41:25.566948  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10"
	I1001 20:41:25.627907  945840 logs.go:123] Gathering logs for describe nodes ...
	I1001 20:41:25.627981  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1001 20:41:25.800760  945840 logs.go:123] Gathering logs for kube-apiserver [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2] ...
	I1001 20:41:25.800834  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2"
	I1001 20:41:25.880615  945840 logs.go:123] Gathering logs for kube-proxy [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c] ...
	I1001 20:41:25.880769  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c"
	I1001 20:41:25.937302  945840 logs.go:123] Gathering logs for kube-proxy [d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b] ...
	I1001 20:41:25.937384  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b"
	I1001 20:41:25.999488  945840 logs.go:123] Gathering logs for storage-provisioner [24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3] ...
	I1001 20:41:25.999576  945840 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3"
	I1001 20:41:26.047120  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:26.047192  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1001 20:41:26.047285  945840 out.go:270] X Problems detected in kubelet:
	W1001 20:41:26.047447  945840 out.go:270]   Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:26.047491  945840 out.go:270]   Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:26.047589  945840 out.go:270]   Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1001 20:41:26.047649  945840 out.go:270]   Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	W1001 20:41:26.047734  945840 out.go:270]   Oct 01 20:41:14 old-k8s-version-992970 kubelet[664]: E1001 20:41:14.350030     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1001 20:41:26.047798  945840 out.go:358] Setting ErrFile to fd 2...
	I1001 20:41:26.047823  945840 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:41:27.637005  956726 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1001 20:41:27.945287  956726 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1001 20:41:27.945578  956726 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-734252 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1001 20:41:28.682451  956726 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1001 20:41:28.682756  956726 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-734252 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1001 20:41:28.973436  956726 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1001 20:41:29.146107  956726 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1001 20:41:29.658363  956726 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1001 20:41:29.658623  956726 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1001 20:41:30.017298  956726 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1001 20:41:30.664638  956726 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1001 20:41:31.387734  956726 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1001 20:41:31.660068  956726 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1001 20:41:32.272619  956726 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1001 20:41:32.273342  956726 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1001 20:41:32.276247  956726 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1001 20:41:32.278856  956726 out.go:235]   - Booting up control plane ...
	I1001 20:41:32.278970  956726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1001 20:41:32.279050  956726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1001 20:41:32.281539  956726 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1001 20:41:32.292357  956726 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1001 20:41:32.298741  956726 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1001 20:41:32.298801  956726 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1001 20:41:32.402797  956726 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1001 20:41:32.402923  956726 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1001 20:41:36.048948  945840 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1001 20:41:36.059936  945840 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1001 20:41:36.062399  945840 out.go:201] 
	W1001 20:41:36.064574  945840 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1001 20:41:36.064614  945840 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1001 20:41:36.064637  945840 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1001 20:41:36.064642  945840 out.go:270] * 
	W1001 20:41:36.065702  945840 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1001 20:41:36.067858  945840 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	cfe4a382ca32f       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   b76e985f3fc61       dashboard-metrics-scraper-8d5bb5db8-fnpzc
	25fc4171ab662       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   e6878542eacb6       storage-provisioner
	9d4566b6c6262       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   dbb603cb1e830       kubernetes-dashboard-cd95d586-sbvn5
	0f30aa0154882       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   4d1dab9b7aae1       kube-proxy-qmc4m
	c4c7afea0c881       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   cb0891c638351       kindnet-gtf5m
	24184e4a5873f       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   e6878542eacb6       storage-provisioner
	9f920837e1307       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   6304a9e2c2a2d       coredns-74ff55c5b-tssxl
	ac305f5c411c7       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   837f5ab784273       busybox
	4404705769419       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   a175bfec8d3d9       kube-scheduler-old-k8s-version-992970
	328abb5bffaac       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   f54102788cf02       kube-controller-manager-old-k8s-version-992970
	2e5b43454546b       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   3860e1d2001de       etcd-old-k8s-version-992970
	52fd1e3edaecc       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   e947da597ee92       kube-apiserver-old-k8s-version-992970
	40d7d19954f83       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   f741058aeb620       busybox
	79338d59b69f5       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   c39049b55be3d       coredns-74ff55c5b-tssxl
	f7713ba62da8c       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   e93b90b2e1388       kindnet-gtf5m
	d60227509a133       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   a0b58bddc1b81       kube-proxy-qmc4m
	7259a19bbbd05       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   4180d0490fb96       kube-scheduler-old-k8s-version-992970
	5fc6df4f53b34       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   c6d52714a59a6       kube-apiserver-old-k8s-version-992970
	080b5cf1e691e       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   13402c16f3d97       kube-controller-manager-old-k8s-version-992970
	1d03e505bc8f2       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   006995f97576b       etcd-old-k8s-version-992970
	
	
	==> containerd <==
	Oct 01 20:37:42 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:37:42.370809654Z" level=info msg="StartContainer for \"ab500b7faec8cf2b4b04d47094b942d328709bd2423c4a7b21f8f48541548655\""
	Oct 01 20:37:42 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:37:42.448065557Z" level=info msg="StartContainer for \"ab500b7faec8cf2b4b04d47094b942d328709bd2423c4a7b21f8f48541548655\" returns successfully"
	Oct 01 20:37:42 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:37:42.477256802Z" level=info msg="shim disconnected" id=ab500b7faec8cf2b4b04d47094b942d328709bd2423c4a7b21f8f48541548655 namespace=k8s.io
	Oct 01 20:37:42 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:37:42.477400906Z" level=warning msg="cleaning up after shim disconnected" id=ab500b7faec8cf2b4b04d47094b942d328709bd2423c4a7b21f8f48541548655 namespace=k8s.io
	Oct 01 20:37:42 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:37:42.477411908Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 01 20:37:43 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:37:43.059852461Z" level=info msg="RemoveContainer for \"26bc7f46cea2357963cca53f0b3acba8d4b4142648b5f3a30a378ac956e70858\""
	Oct 01 20:37:43 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:37:43.066851180Z" level=info msg="RemoveContainer for \"26bc7f46cea2357963cca53f0b3acba8d4b4142648b5f3a30a378ac956e70858\" returns successfully"
	Oct 01 20:38:36 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:38:36.345855817Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:38:36 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:38:36.351480339Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 01 20:38:36 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:38:36.353531108Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 01 20:38:36 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:38:36.353610523Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 01 20:39:14 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:14.349915877Z" level=info msg="CreateContainer within sandbox \"b76e985f3fc616338234eb7d16a75f3a9718a67f429d972ec6ec097e362bb3bc\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 01 20:39:14 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:14.364773513Z" level=info msg="CreateContainer within sandbox \"b76e985f3fc616338234eb7d16a75f3a9718a67f429d972ec6ec097e362bb3bc\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410\""
	Oct 01 20:39:14 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:14.365556856Z" level=info msg="StartContainer for \"cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410\""
	Oct 01 20:39:14 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:14.441223738Z" level=info msg="StartContainer for \"cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410\" returns successfully"
	Oct 01 20:39:14 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:14.468043132Z" level=info msg="shim disconnected" id=cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410 namespace=k8s.io
	Oct 01 20:39:14 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:14.468102241Z" level=warning msg="cleaning up after shim disconnected" id=cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410 namespace=k8s.io
	Oct 01 20:39:14 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:14.468112267Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 01 20:39:14 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:14.479245556Z" level=warning msg="cleanup warnings time=\"2024-10-01T20:39:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
	Oct 01 20:39:15 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:15.279659170Z" level=info msg="RemoveContainer for \"ab500b7faec8cf2b4b04d47094b942d328709bd2423c4a7b21f8f48541548655\""
	Oct 01 20:39:15 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:39:15.288578896Z" level=info msg="RemoveContainer for \"ab500b7faec8cf2b4b04d47094b942d328709bd2423c4a7b21f8f48541548655\" returns successfully"
	Oct 01 20:41:25 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:41:25.347509734Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:41:25 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:41:25.364873588Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Oct 01 20:41:25 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:41:25.366661323Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 01 20:41:25 old-k8s-version-992970 containerd[569]: time="2024-10-01T20:41:25.366718487Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [79338d59b69f55cfa1a600b5cd37ff7c39a17dcc3324d5ba6b51d7222a2df763] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49809 - 61096 "HINFO IN 7521298883262512285.7997996436878363922. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004507663s
	
	
	==> coredns [9f920837e1307759503a66e2f51b2eb0411bccf80d2475971a50b2d9a7fe0648] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:48817 - 9866 "HINFO IN 6817670228148696217.835533269175454090. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012430923s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1001 20:36:18.685967       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-01 20:35:48.684931842 +0000 UTC m=+0.035656229) (total time: 30.000934473s):
	Trace[2019727887]: [30.000934473s] [30.000934473s] END
	E1001 20:36:18.686000       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1001 20:36:18.685970       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-01 20:35:48.684040137 +0000 UTC m=+0.034764524) (total time: 30.001904314s):
	Trace[939984059]: [30.001904314s] [30.001904314s] END
	E1001 20:36:18.686017       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1001 20:36:18.685988       1 trace.go:116] Trace[1427131847]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-01 20:35:48.685615773 +0000 UTC m=+0.036340168) (total time: 30.000271441s):
	Trace[1427131847]: [30.000271441s] [30.000271441s] END
	E1001 20:36:18.686023       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-992970
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-992970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=028fa3fa4ead204345663a497a11836d2b7758c4
	                    minikube.k8s.io/name=old-k8s-version-992970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_01T20_33_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 01 Oct 2024 20:33:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-992970
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 01 Oct 2024 20:41:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 01 Oct 2024 20:41:38 +0000   Tue, 01 Oct 2024 20:32:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 01 Oct 2024 20:41:38 +0000   Tue, 01 Oct 2024 20:32:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 01 Oct 2024 20:41:38 +0000   Tue, 01 Oct 2024 20:32:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 01 Oct 2024 20:41:38 +0000   Tue, 01 Oct 2024 20:33:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-992970
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 30c3a4c454a544e9b83f6daa8e2dd6f9
	  System UUID:                509d5b47-1fb6-4a4d-9d0d-a5d5d60d4470
	  Boot ID:                    3aa8f718-8507-41e8-80ca-0eb33f6ce70e
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 coredns-74ff55c5b-tssxl                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m19s
	  kube-system                 etcd-old-k8s-version-992970                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m25s
	  kube-system                 kindnet-gtf5m                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m18s
	  kube-system                 kube-apiserver-old-k8s-version-992970             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-controller-manager-old-k8s-version-992970    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-qmc4m                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m18s
	  kube-system                 kube-scheduler-old-k8s-version-992970             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 metrics-server-9975d5f86-g89nw                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m35s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-fnpzc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-sbvn5               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m45s (x5 over 8m45s)  kubelet     Node old-k8s-version-992970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m45s (x5 over 8m45s)  kubelet     Node old-k8s-version-992970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m45s (x4 over 8m45s)  kubelet     Node old-k8s-version-992970 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m25s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m25s                  kubelet     Node old-k8s-version-992970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m25s                  kubelet     Node old-k8s-version-992970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m25s                  kubelet     Node old-k8s-version-992970 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m18s                  kubelet     Node old-k8s-version-992970 status is now: NodeReady
	  Normal  Starting                 8m17s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-992970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x7 over 6m4s)    kubelet     Node old-k8s-version-992970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-992970 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m49s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [1d03e505bc8f2936719d97de787d31429005f479c007cbfbc2bd28f02e34e82a] <==
	raft2024/10/01 20:32:54 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/10/01 20:32:54 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/10/01 20:32:54 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/10/01 20:32:54 INFO: ea7e25599daad906 became leader at term 2
	raft2024/10/01 20:32:54 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-10-01 20:32:54.879610 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-01 20:32:54.884730 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-01 20:32:54.885138 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-01 20:32:54.885242 I | etcdserver: published {Name:old-k8s-version-992970 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-10-01 20:32:54.885315 I | embed: ready to serve client requests
	2024-10-01 20:32:54.886977 I | embed: serving client requests on 192.168.76.2:2379
	2024-10-01 20:32:54.887103 I | embed: ready to serve client requests
	2024-10-01 20:32:54.901132 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-01 20:33:19.075860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:33:21.220574 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:33:31.220643 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:33:41.220671 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:33:51.220530 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:34:01.220680 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:34:11.220520 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:34:21.220536 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:34:31.220440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:34:41.220743 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:34:51.220869 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:35:01.237852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [2e5b43454546b74d5cbd335e998bc3ac9c412fb49018fdb2102b7edf5913fd15] <==
	2024-10-01 20:37:33.873992 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:37:43.873999 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:37:53.873913 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:38:03.874043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:38:13.874235 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:38:23.873817 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:38:33.873892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:38:43.873871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:38:53.873881 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:39:03.873994 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:39:13.873978 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:39:23.873905 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:39:33.873844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:39:43.873813 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:39:53.873925 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:40:03.873981 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:40:13.873844 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:40:23.873931 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:40:33.873852 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:40:43.873827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:40:53.873860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:41:03.873793 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:41:13.874809 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:41:23.874340 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-01 20:41:33.873979 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 20:41:38 up  4:24,  0 users,  load average: 1.15, 1.82, 2.44
	Linux old-k8s-version-992970 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [c4c7afea0c881ad778169fc932cb577fac15803ab1b1b8105c9409e77472d87c] <==
	I1001 20:39:29.732605       1 main.go:299] handling current node
	I1001 20:39:39.734118       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:39:39.734210       1 main.go:299] handling current node
	I1001 20:39:49.726403       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:39:49.726461       1 main.go:299] handling current node
	I1001 20:39:59.732872       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:39:59.732907       1 main.go:299] handling current node
	I1001 20:40:09.727290       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:40:09.727386       1 main.go:299] handling current node
	I1001 20:40:19.733078       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:40:19.733115       1 main.go:299] handling current node
	I1001 20:40:29.725395       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:40:29.725441       1 main.go:299] handling current node
	I1001 20:40:39.727596       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:40:39.727635       1 main.go:299] handling current node
	I1001 20:40:49.726144       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:40:49.726369       1 main.go:299] handling current node
	I1001 20:40:59.731819       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:40:59.731852       1 main.go:299] handling current node
	I1001 20:41:09.733960       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:41:09.734006       1 main.go:299] handling current node
	I1001 20:41:19.726026       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:41:19.726081       1 main.go:299] handling current node
	I1001 20:41:29.732506       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:41:29.732540       1 main.go:299] handling current node
	
	
	==> kindnet [f7713ba62da8c36dfceef4ece1df112d881cc0fcc4335fd1bed2254c213c46cf] <==
	I1001 20:33:23.227227       1 controller.go:338] Waiting for informer caches to sync
	I1001 20:33:23.227233       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1001 20:33:23.427679       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1001 20:33:23.427709       1 metrics.go:61] Registering metrics
	I1001 20:33:23.427778       1 controller.go:374] Syncing nftables rules
	I1001 20:33:33.232520       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:33:33.232559       1 main.go:299] handling current node
	I1001 20:33:43.225750       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:33:43.225783       1 main.go:299] handling current node
	I1001 20:33:53.233614       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:33:53.233650       1 main.go:299] handling current node
	I1001 20:34:03.232527       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:34:03.232559       1 main.go:299] handling current node
	I1001 20:34:13.228560       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:34:13.228671       1 main.go:299] handling current node
	I1001 20:34:23.225521       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:34:23.225577       1 main.go:299] handling current node
	I1001 20:34:33.232980       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:34:33.233071       1 main.go:299] handling current node
	I1001 20:34:43.226706       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:34:43.226745       1 main.go:299] handling current node
	I1001 20:34:53.225614       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:34:53.225654       1 main.go:299] handling current node
	I1001 20:35:03.232507       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I1001 20:35:03.232543       1 main.go:299] handling current node
	
	
	==> kube-apiserver [52fd1e3edaecc71c42208106e35c2a84841081ef4416d14421dd250c6f2f3bc2] <==
	I1001 20:38:11.984435       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:38:11.984444       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1001 20:38:43.962998       1 client.go:360] parsed scheme: "passthrough"
	I1001 20:38:43.963039       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:38:43.963048       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1001 20:38:48.594920       1 handler_proxy.go:102] no RequestInfo found in the context
	E1001 20:38:48.594994       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1001 20:38:48.595011       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:39:23.566547       1 client.go:360] parsed scheme: "passthrough"
	I1001 20:39:23.566601       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:39:23.566611       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1001 20:39:55.198951       1 client.go:360] parsed scheme: "passthrough"
	I1001 20:39:55.198996       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:39:55.199006       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1001 20:40:32.108198       1 client.go:360] parsed scheme: "passthrough"
	I1001 20:40:32.108245       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:40:32.108256       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1001 20:40:46.899352       1 handler_proxy.go:102] no RequestInfo found in the context
	E1001 20:40:46.899448       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1001 20:40:46.899467       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1001 20:41:11.105041       1 client.go:360] parsed scheme: "passthrough"
	I1001 20:41:11.105095       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:41:11.105104       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [5fc6df4f53b344aa9f94e10632010c73d819154b6125ba88992b40ee32775aff] <==
	I1001 20:33:02.249337       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1001 20:33:02.249476       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1001 20:33:02.262212       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1001 20:33:02.266252       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1001 20:33:02.266275       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1001 20:33:02.739369       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1001 20:33:02.784600       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1001 20:33:02.901904       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1001 20:33:02.903080       1 controller.go:606] quota admission added evaluator for: endpoints
	I1001 20:33:02.910543       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1001 20:33:03.297926       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1001 20:33:03.876537       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1001 20:33:04.646345       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1001 20:33:04.737513       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1001 20:33:19.886808       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1001 20:33:20.083048       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1001 20:33:33.365176       1 client.go:360] parsed scheme: "passthrough"
	I1001 20:33:33.365220       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:33:33.365229       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1001 20:34:05.444703       1 client.go:360] parsed scheme: "passthrough"
	I1001 20:34:05.444746       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:34:05.444899       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1001 20:34:48.708708       1 client.go:360] parsed scheme: "passthrough"
	I1001 20:34:48.708749       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1001 20:34:48.708757       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [080b5cf1e691e4162dd5e7dd0528de4b40a61947c03f91d7f15fd907b32e3fb7] <==
	W1001 20:33:20.006859       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-992970. Assuming now as a timestamp.
	I1001 20:33:20.006899       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I1001 20:33:20.027479       1 event.go:291] "Event occurred" object="kube-dns" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/kube-dns: endpoints \"kube-dns\" already exists"
	I1001 20:33:20.042484       1 shared_informer.go:247] Caches are synced for resource quota 
	I1001 20:33:20.053780       1 shared_informer.go:247] Caches are synced for resource quota 
	E1001 20:33:20.064905       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E1001 20:33:20.065423       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1001 20:33:20.076305       1 shared_informer.go:247] Caches are synced for daemon sets 
	I1001 20:33:20.104087       1 shared_informer.go:247] Caches are synced for stateful set 
	E1001 20:33:20.108620       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1001 20:33:20.123088       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gtf5m"
	I1001 20:33:20.123290       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-qmc4m"
	E1001 20:33:20.195888       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"a8dd87fb-3dc5-455a-8c22-da5d63550ade", ResourceVersion:"422", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63863411584, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b39da0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b39dc0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001b39de0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001b39e00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001b39e20), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001965100), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b39e40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001b39e60), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001b39ea0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40019692c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001c98a58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000604070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40016cb3e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001c98aa8)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I1001 20:33:20.228314       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I1001 20:33:20.503654       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1001 20:33:20.503685       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1001 20:33:20.528674       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1001 20:33:21.586045       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1001 20:33:21.603011       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-qhht5"
	I1001 20:33:25.007101       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1001 20:35:02.479284       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E1001 20:35:02.739195       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E1001 20:35:02.739915       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E1001 20:35:02.786855       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1001 20:35:03.617870       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-9975d5f86-g89nw"
	
	
	==> kube-controller-manager [328abb5bffaac1a22bfd5f584cdc00199eb434817c90cc9cd9e143e84ce327e6] <==
	E1001 20:37:35.042858       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1001 20:37:40.646574       1 request.go:655] Throttling request took 1.048522022s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W1001 20:37:41.497955       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1001 20:38:05.548161       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1001 20:38:13.148517       1 request.go:655] Throttling request took 1.046723667s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1?timeout=32s
	W1001 20:38:13.999883       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1001 20:38:36.050188       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1001 20:38:45.650429       1 request.go:655] Throttling request took 1.048334735s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1001 20:38:46.501905       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1001 20:39:06.556712       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1001 20:39:18.152385       1 request.go:655] Throttling request took 1.048366604s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1001 20:39:19.003863       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1001 20:39:37.058721       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1001 20:39:50.654436       1 request.go:655] Throttling request took 1.048345518s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W1001 20:39:51.505972       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1001 20:40:07.560408       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1001 20:40:23.156471       1 request.go:655] Throttling request took 1.048113423s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1001 20:40:24.007897       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1001 20:40:38.062557       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1001 20:40:55.658378       1 request.go:655] Throttling request took 1.048150506s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1001 20:40:56.509988       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1001 20:41:08.570349       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1001 20:41:28.160342       1 request.go:655] Throttling request took 1.048204441s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W1001 20:41:29.012055       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1001 20:41:39.074472       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [0f30aa015488209d6fbe9eca4ce2c68a14a169edb1066c1cc567c52154355d9c] <==
	I1001 20:35:49.928799       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1001 20:35:49.928907       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1001 20:35:49.958655       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1001 20:35:49.958749       1 server_others.go:185] Using iptables Proxier.
	I1001 20:35:49.959052       1 server.go:650] Version: v1.20.0
	I1001 20:35:49.959932       1 config.go:315] Starting service config controller
	I1001 20:35:49.960041       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1001 20:35:49.960082       1 config.go:224] Starting endpoint slice config controller
	I1001 20:35:49.965031       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1001 20:35:50.060257       1 shared_informer.go:247] Caches are synced for service config 
	I1001 20:35:50.065463       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [d60227509a1336d0d229da0302aa4bf1e9b4769966d58ff8b5e039083c1f0a7b] <==
	I1001 20:33:20.978979       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I1001 20:33:20.979078       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W1001 20:33:21.132749       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1001 20:33:21.132895       1 server_others.go:185] Using iptables Proxier.
	I1001 20:33:21.133311       1 server.go:650] Version: v1.20.0
	I1001 20:33:21.141612       1 config.go:315] Starting service config controller
	I1001 20:33:21.141623       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1001 20:33:21.141638       1 config.go:224] Starting endpoint slice config controller
	I1001 20:33:21.141642       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1001 20:33:21.242272       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I1001 20:33:21.242334       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [4404705769419b51fd2e0516354becb74724cc1d60ee54f13b71d91fbcc555c6] <==
	I1001 20:35:39.603610       1 serving.go:331] Generated self-signed cert in-memory
	W1001 20:35:45.958337       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 20:35:45.958563       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 20:35:45.958577       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 20:35:45.958582       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 20:35:46.198829       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1001 20:35:46.199296       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:35:46.199499       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:35:46.199582       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1001 20:35:46.301213       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [7259a19bbbd05708ca8c1a1c0d6bd098888b8132c3c59dba923d8dfebcd4f251] <==
	I1001 20:32:56.293114       1 serving.go:331] Generated self-signed cert in-memory
	W1001 20:33:01.402796       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1001 20:33:01.402829       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1001 20:33:01.402842       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1001 20:33:01.402847       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1001 20:33:01.466664       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1001 20:33:01.468188       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:33:01.468362       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1001 20:33:01.468444       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1001 20:33:01.501696       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1001 20:33:01.510140       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 20:33:01.511309       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1001 20:33:01.511408       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1001 20:33:01.511461       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1001 20:33:01.511527       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1001 20:33:01.511603       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1001 20:33:01.511664       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1001 20:33:01.511800       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1001 20:33:01.511884       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 20:33:01.511962       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1001 20:33:01.512028       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1001 20:33:02.387367       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1001 20:33:02.459340       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1001 20:33:02.567827       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I1001 20:33:04.569474       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
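	Note: the RBAC warnings above carry their own suggested remedy. A minimal sketch of that rolebinding, adapted to the scheduler's identity (the binding name is arbitrary and chosen here for illustration; the role is the one kubeadm-built clusters already have in kube-system):

	  kubectl --context old-k8s-version-992970 -n kube-system create rolebinding extension-apiserver-authentication-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler

	No manual fix was applied in this run; the "forbidden" list/watch errors stop after 20:33:02 and the scheduler reports its caches synced at 20:33:04, consistent with the default RBAC objects simply not having been reconciled yet at startup.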
	
	
	==> kubelet <==
	Oct 01 20:40:11 old-k8s-version-992970 kubelet[664]: E1001 20:40:11.347785     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:40:21 old-k8s-version-992970 kubelet[664]: I1001 20:40:21.344147     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410
	Oct 01 20:40:21 old-k8s-version-992970 kubelet[664]: E1001 20:40:21.344561     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	Oct 01 20:40:26 old-k8s-version-992970 kubelet[664]: E1001 20:40:26.345878     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:40:32 old-k8s-version-992970 kubelet[664]: I1001 20:40:32.344180     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410
	Oct 01 20:40:32 old-k8s-version-992970 kubelet[664]: E1001 20:40:32.344565     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	Oct 01 20:40:37 old-k8s-version-992970 kubelet[664]: E1001 20:40:37.344837     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:40:47 old-k8s-version-992970 kubelet[664]: I1001 20:40:47.344393     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410
	Oct 01 20:40:47 old-k8s-version-992970 kubelet[664]: E1001 20:40:47.345723     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	Oct 01 20:40:49 old-k8s-version-992970 kubelet[664]: E1001 20:40:49.344983     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: I1001 20:41:00.344146     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410
	Oct 01 20:41:00 old-k8s-version-992970 kubelet[664]: E1001 20:41:00.344524     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	Oct 01 20:41:03 old-k8s-version-992970 kubelet[664]: E1001 20:41:03.344940     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: I1001 20:41:11.344155     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410
	Oct 01 20:41:11 old-k8s-version-992970 kubelet[664]: E1001 20:41:11.344510     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	Oct 01 20:41:14 old-k8s-version-992970 kubelet[664]: E1001 20:41:14.350030     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:41:25 old-k8s-version-992970 kubelet[664]: E1001 20:41:25.366976     664 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 01 20:41:25 old-k8s-version-992970 kubelet[664]: E1001 20:41:25.367026     664 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 01 20:41:25 old-k8s-version-992970 kubelet[664]: E1001 20:41:25.367158     664 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-qwlv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-g89nw_kube-system(fbd5b34
f-4536-4370-bb44-216e0f670b72): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Oct 01 20:41:25 old-k8s-version-992970 kubelet[664]: E1001 20:41:25.367190     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Oct 01 20:41:26 old-k8s-version-992970 kubelet[664]: I1001 20:41:26.344143     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410
	Oct 01 20:41:26 old-k8s-version-992970 kubelet[664]: E1001 20:41:26.344719     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
	Oct 01 20:41:36 old-k8s-version-992970 kubelet[664]: E1001 20:41:36.423173     664 pod_workers.go:191] Error syncing pod fbd5b34f-4536-4370-bb44-216e0f670b72 ("metrics-server-9975d5f86-g89nw_kube-system(fbd5b34f-4536-4370-bb44-216e0f670b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 01 20:41:39 old-k8s-version-992970 kubelet[664]: I1001 20:41:39.344289     664 scope.go:95] [topologymanager] RemoveContainer - Container ID: cfe4a382ca32fb2a55c4b4f6f17de55427de4108150172d0c27b7e80549c7410
	Oct 01 20:41:39 old-k8s-version-992970 kubelet[664]: E1001 20:41:39.344926     664 pod_workers.go:191] Error syncing pod 355624fc-f758-41e2-abf8-8753749fede6 ("dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-fnpzc_kubernetes-dashboard(355624fc-f758-41e2-abf8-8753749fede6)"
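	Note: every metrics-server failure above traces to the same cause: the registry host fake.domain cannot be resolved, so the image pull never succeeds and the pod stays in ImagePullBackOff; the dashboard-metrics-scraper lines are ordinary CrashLoopBackOff back-off retries of an already-failed container. While the cluster is still running, the usual kubectl-side checks would be (illustrative; the pod names are the ones shown in the log):

	  kubectl --context old-k8s-version-992970 -n kube-system describe pod metrics-server-9975d5f86-g89nw
	  kubectl --context old-k8s-version-992970 -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-fnpzc --previous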
	
	
	==> kubernetes-dashboard [9d4566b6c6262cc2d93e033c5d670a19566bbab95da9270427efcc3013f3bff3] <==
	2024/10/01 20:36:14 Using namespace: kubernetes-dashboard
	2024/10/01 20:36:14 Using in-cluster config to connect to apiserver
	2024/10/01 20:36:14 Using secret token for csrf signing
	2024/10/01 20:36:14 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/01 20:36:14 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/01 20:36:14 Successful initial request to the apiserver, version: v1.20.0
	2024/10/01 20:36:14 Generating JWE encryption key
	2024/10/01 20:36:14 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/01 20:36:14 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/01 20:36:14 Initializing JWE encryption key from synchronized object
	2024/10/01 20:36:14 Creating in-cluster Sidecar client
	2024/10/01 20:36:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:36:14 Serving insecurely on HTTP port: 9090
	2024/10/01 20:36:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:37:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:37:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:38:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:38:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:39:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:39:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:40:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:40:44 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:41:14 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/01 20:36:14 Starting overwatch
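	Note: the repeated metric-client health-check failures mean the dashboard cannot reach the dashboard-metrics-scraper service, which is expected while the scraper container is crash-looping (see the kubelet log above). A quick way to confirm the service has no ready endpoints, assuming the cluster is still up:

	  kubectl --context old-k8s-version-992970 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper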
	
	
	==> storage-provisioner [24184e4a5873f0f6bdf9295c29297b18d84e08d613ec81403ea722addbb4c8f3] <==
	I1001 20:35:48.754951       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1001 20:36:18.757693       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [25fc4171ab662f81d7910b65a795b87eaee250a13f295f6d6ad7cdfb6eb53a10] <==
	I1001 20:36:33.476971       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1001 20:36:33.493255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1001 20:36:33.494146       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1001 20:36:50.979360       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1001 20:36:50.979638       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-992970_0783fc42-0ad4-462a-b96f-174847c7766d!
	I1001 20:36:50.979877       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e8c0b3bd-794a-486b-9a1b-378b1a87b285", APIVersion:"v1", ResourceVersion:"849", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-992970_0783fc42-0ad4-462a-b96f-174847c7766d became leader
	I1001 20:36:51.080771       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-992970_0783fc42-0ad4-462a-b96f-174847c7766d!
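	Note: the first storage-provisioner instance exited because the apiserver at 10.96.0.1:443 was unreachable during the restart window; the replacement instance connected, acquired the kube-system/k8s.io-minikube-hostpath leader-election lease (held on an Endpoints object, per the event above), and started its controller normally. The current lease holder can be read back with something like:

	  kubectl --context old-k8s-version-992970 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml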
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-992970 -n old-k8s-version-992970
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-992970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-g89nw
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-992970 describe pod metrics-server-9975d5f86-g89nw
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-992970 describe pod metrics-server-9975d5f86-g89nw: exit status 1 (104.080515ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-g89nw" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-992970 describe pod metrics-server-9975d5f86-g89nw: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (382.73s)
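To iterate on just this group locally, the integration tests accept go test's -run filter; a sketch only, assuming a checkout of the minikube repository with the arm64 binary already built under out/ (CI passes additional harness flags that are omitted here):

  go test ./test/integration -run 'TestStartStop/group/old-k8s-version' -timeout 90m -v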

                                                
                                    

Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.88
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 9.43
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.53
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 211.78
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 16.1
34 TestAddons/parallel/Ingress 18.96
35 TestAddons/parallel/InspektorGadget 10.94
36 TestAddons/parallel/MetricsServer 6.83
38 TestAddons/parallel/CSI 40.34
39 TestAddons/parallel/Headlamp 15.71
40 TestAddons/parallel/CloudSpanner 6.56
41 TestAddons/parallel/LocalPath 8.54
42 TestAddons/parallel/NvidiaDevicePlugin 6.54
43 TestAddons/parallel/Yakd 11.91
44 TestAddons/StoppedEnableDisable 12.22
45 TestCertOptions 36.02
46 TestCertExpiration 228.75
48 TestForceSystemdFlag 32.6
49 TestForceSystemdEnv 42.64
50 TestDockerEnvContainerd 47.82
55 TestErrorSpam/setup 29.66
56 TestErrorSpam/start 0.67
57 TestErrorSpam/status 1.03
58 TestErrorSpam/pause 1.72
59 TestErrorSpam/unpause 1.79
60 TestErrorSpam/stop 1.46
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 87.99
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 5.67
67 TestFunctional/serial/KubeContext 0.06
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.01
72 TestFunctional/serial/CacheCmd/cache/add_local 1.19
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.93
77 TestFunctional/serial/CacheCmd/cache/delete 0.12
78 TestFunctional/serial/MinikubeKubectlCmd 0.13
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 46.12
81 TestFunctional/serial/ComponentHealth 0.09
82 TestFunctional/serial/LogsCmd 1.65
83 TestFunctional/serial/LogsFileCmd 1.79
84 TestFunctional/serial/InvalidService 4.19
86 TestFunctional/parallel/ConfigCmd 0.43
87 TestFunctional/parallel/DashboardCmd 7.67
88 TestFunctional/parallel/DryRun 0.47
89 TestFunctional/parallel/InternationalLanguage 0.25
90 TestFunctional/parallel/StatusCmd 1.01
94 TestFunctional/parallel/ServiceCmdConnect 12.62
95 TestFunctional/parallel/AddonsCmd 0.19
96 TestFunctional/parallel/PersistentVolumeClaim 25.23
98 TestFunctional/parallel/SSHCmd 0.67
99 TestFunctional/parallel/CpCmd 2.12
101 TestFunctional/parallel/FileSync 0.27
102 TestFunctional/parallel/CertSync 2.06
106 TestFunctional/parallel/NodeLabels 0.08
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
110 TestFunctional/parallel/License 0.29
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.42
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
124 TestFunctional/parallel/ProfileCmd/profile_list 0.39
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
126 TestFunctional/parallel/MountCmd/any-port 7.29
127 TestFunctional/parallel/ServiceCmd/List 0.59
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
130 TestFunctional/parallel/ServiceCmd/Format 0.39
131 TestFunctional/parallel/ServiceCmd/URL 0.35
132 TestFunctional/parallel/MountCmd/specific-port 2.04
133 TestFunctional/parallel/MountCmd/VerifyCleanup 2.44
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.26
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.7
141 TestFunctional/parallel/ImageCommands/Setup 0.83
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.23
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.35
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 132.84
159 TestMultiControlPlane/serial/DeployApp 37.44
160 TestMultiControlPlane/serial/PingHostFromPods 1.58
161 TestMultiControlPlane/serial/AddWorkerNode 20.42
162 TestMultiControlPlane/serial/NodeLabels 0.11
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.96
164 TestMultiControlPlane/serial/CopyFile 18.26
165 TestMultiControlPlane/serial/StopSecondaryNode 12.77
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
167 TestMultiControlPlane/serial/RestartSecondaryNode 18.02
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.98
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 149.36
170 TestMultiControlPlane/serial/DeleteSecondaryNode 9.59
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
172 TestMultiControlPlane/serial/StopCluster 35.99
173 TestMultiControlPlane/serial/RestartCluster 79.62
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
175 TestMultiControlPlane/serial/AddSecondaryNode 43.18
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.97
180 TestJSONOutput/start/Command 44.93
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.72
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.64
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.78
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 35.88
206 TestKicCustomNetwork/use_default_bridge_network 32.55
207 TestKicExistingNetwork 33.73
208 TestKicCustomSubnet 32.38
209 TestKicStaticIP 31.18
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 68.52
214 TestMountStart/serial/StartWithMountFirst 6.52
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 6.43
217 TestMountStart/serial/VerifyMountSecond 0.25
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.2
221 TestMountStart/serial/RestartStopped 7.27
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 63.44
226 TestMultiNode/serial/DeployApp2Nodes 18.03
227 TestMultiNode/serial/PingHostFrom2Pods 0.96
228 TestMultiNode/serial/AddNode 16.46
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.66
231 TestMultiNode/serial/CopyFile 9.6
232 TestMultiNode/serial/StopNode 2.21
233 TestMultiNode/serial/StartAfterStop 9.58
234 TestMultiNode/serial/RestartKeepsNodes 95.37
235 TestMultiNode/serial/DeleteNode 5.4
236 TestMultiNode/serial/StopMultiNode 23.96
237 TestMultiNode/serial/RestartMultiNode 49.97
238 TestMultiNode/serial/ValidateNameConflict 33.26
243 TestPreload 114.19
245 TestScheduledStopUnix 108.52
248 TestInsufficientStorage 10.61
249 TestRunningBinaryUpgrade 89.5
251 TestKubernetesUpgrade 99.86
252 TestMissingContainerUpgrade 182.33
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 38.87
256 TestNoKubernetes/serial/StartWithStopK8s 18.66
257 TestNoKubernetes/serial/Start 4.81
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
259 TestNoKubernetes/serial/ProfileList 0.94
260 TestNoKubernetes/serial/Stop 1.21
261 TestNoKubernetes/serial/StartNoArgs 6.49
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
263 TestStoppedBinaryUpgrade/Setup 0.7
264 TestStoppedBinaryUpgrade/Upgrade 139.73
273 TestPause/serial/Start 85.02
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.3
282 TestNetworkPlugins/group/false 3.85
283 TestPause/serial/SecondStartNoReconfiguration 6.77
287 TestPause/serial/Pause 0.88
288 TestPause/serial/VerifyStatus 0.39
289 TestPause/serial/Unpause 0.82
290 TestPause/serial/PauseAgain 1.06
291 TestPause/serial/DeletePaused 3.02
292 TestPause/serial/VerifyDeletedResources 0.47
294 TestStartStop/group/old-k8s-version/serial/FirstStart 151.16
295 TestStartStop/group/old-k8s-version/serial/DeployApp 10.09
297 TestStartStop/group/no-preload/serial/FirstStart 67.88
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.93
299 TestStartStop/group/old-k8s-version/serial/Stop 14.53
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.29
302 TestStartStop/group/no-preload/serial/DeployApp 9.41
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
304 TestStartStop/group/no-preload/serial/Stop 12.06
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
306 TestStartStop/group/no-preload/serial/SecondStart 265.84
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
310 TestStartStop/group/no-preload/serial/Pause 3.07
312 TestStartStop/group/embed-certs/serial/FirstStart 49.51
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
316 TestStartStop/group/old-k8s-version/serial/Pause 2.95
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 91.06
319 TestStartStop/group/embed-certs/serial/DeployApp 9.44
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.47
321 TestStartStop/group/embed-certs/serial/Stop 12.31
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
323 TestStartStop/group/embed-certs/serial/SecondStart 267.82
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.1
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.93
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
332 TestStartStop/group/embed-certs/serial/Pause 3.04
334 TestStartStop/group/newest-cni/serial/FirstStart 37.84
335 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.05
337 TestStartStop/group/newest-cni/serial/Stop 1.26
338 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
339 TestStartStop/group/newest-cni/serial/SecondStart 15.2
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
343 TestStartStop/group/newest-cni/serial/Pause 3.33
344 TestNetworkPlugins/group/auto/Start 94.59
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.68
349 TestNetworkPlugins/group/kindnet/Start 52.39
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
352 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
353 TestNetworkPlugins/group/auto/KubeletFlags 0.3
354 TestNetworkPlugins/group/auto/NetCatPod 9.31
355 TestNetworkPlugins/group/kindnet/DNS 0.25
356 TestNetworkPlugins/group/kindnet/Localhost 0.16
357 TestNetworkPlugins/group/kindnet/HairPin 0.18
358 TestNetworkPlugins/group/auto/DNS 0.22
359 TestNetworkPlugins/group/auto/Localhost 0.21
360 TestNetworkPlugins/group/auto/HairPin 0.21
361 TestNetworkPlugins/group/calico/Start 74.34
362 TestNetworkPlugins/group/custom-flannel/Start 60.43
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
365 TestNetworkPlugins/group/calico/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/KubeletFlags 0.26
367 TestNetworkPlugins/group/calico/NetCatPod 10.26
368 TestNetworkPlugins/group/custom-flannel/DNS 0.27
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
371 TestNetworkPlugins/group/calico/DNS 0.23
372 TestNetworkPlugins/group/calico/Localhost 0.22
373 TestNetworkPlugins/group/calico/HairPin 0.21
374 TestNetworkPlugins/group/enable-default-cni/Start 83.01
375 TestNetworkPlugins/group/flannel/Start 58.57
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
378 TestNetworkPlugins/group/flannel/NetCatPod 9.27
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
381 TestNetworkPlugins/group/flannel/DNS 0.19
382 TestNetworkPlugins/group/flannel/Localhost 0.2
383 TestNetworkPlugins/group/flannel/HairPin 0.15
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
387 TestNetworkPlugins/group/bridge/Start 41.6
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.26
389 TestNetworkPlugins/group/bridge/NetCatPod 10.31
390 TestNetworkPlugins/group/bridge/DNS 0.16
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (7.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-037780 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-037780 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.874844915s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.88s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1001 19:46:36.980415  741264 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1001 19:46:36.980512  741264 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-037780
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-037780: exit status 85 (71.203081ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-037780 | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |          |
	|         | -p download-only-037780        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:46:29
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:46:29.153390  741269 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:46:29.153535  741269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:46:29.153547  741269 out.go:358] Setting ErrFile to fd 2...
	I1001 19:46:29.153552  741269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:46:29.153778  741269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	W1001 19:46:29.153906  741269 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19736-735883/.minikube/config/config.json: open /home/jenkins/minikube-integration/19736-735883/.minikube/config/config.json: no such file or directory
	I1001 19:46:29.154282  741269 out.go:352] Setting JSON to true
	I1001 19:46:29.155171  741269 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12537,"bootTime":1727799453,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 19:46:29.155239  741269 start.go:139] virtualization:  
	I1001 19:46:29.158067  741269 out.go:97] [download-only-037780] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1001 19:46:29.158232  741269 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball: no such file or directory
	I1001 19:46:29.158293  741269 notify.go:220] Checking for updates...
	I1001 19:46:29.160297  741269 out.go:169] MINIKUBE_LOCATION=19736
	I1001 19:46:29.162221  741269 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:46:29.164056  741269 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 19:46:29.165892  741269 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	I1001 19:46:29.167417  741269 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1001 19:46:29.170690  741269 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 19:46:29.170953  741269 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:46:29.193580  741269 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 19:46:29.193690  741269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 19:46:29.252805  741269 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 19:46:29.242984464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 19:46:29.252924  741269 docker.go:318] overlay module found
	I1001 19:46:29.255057  741269 out.go:97] Using the docker driver based on user configuration
	I1001 19:46:29.255083  741269 start.go:297] selected driver: docker
	I1001 19:46:29.255089  741269 start.go:901] validating driver "docker" against <nil>
	I1001 19:46:29.255209  741269 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 19:46:29.316527  741269 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 19:46:29.307079909 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 19:46:29.316734  741269 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 19:46:29.317022  741269 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1001 19:46:29.317183  741269 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 19:46:29.319419  741269 out.go:169] Using Docker driver with root privileges
	I1001 19:46:29.321401  741269 cni.go:84] Creating CNI manager for ""
	I1001 19:46:29.321469  741269 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 19:46:29.321481  741269 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 19:46:29.321562  741269 start.go:340] cluster config:
	{Name:download-only-037780 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-037780 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:46:29.323399  741269 out.go:97] Starting "download-only-037780" primary control-plane node in "download-only-037780" cluster
	I1001 19:46:29.323427  741269 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1001 19:46:29.325097  741269 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1001 19:46:29.325122  741269 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1001 19:46:29.325287  741269 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 19:46:29.339963  741269 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 19:46:29.340699  741269 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 19:46:29.340805  741269 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 19:46:29.384517  741269 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1001 19:46:29.384543  741269 cache.go:56] Caching tarball of preloaded images
	I1001 19:46:29.384715  741269 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1001 19:46:29.387269  741269 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1001 19:46:29.387288  741269 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1001 19:46:29.476552  741269 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-037780 host does not exist
	  To start a cluster, run: "minikube start -p download-only-037780"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-037780
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (9.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-543885 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-543885 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.427152499s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (9.43s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1001 19:46:46.801015  741264 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1001 19:46:46.801051  741264 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-543885
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-543885: exit status 85 (66.210403ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-037780 | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |                     |
	|         | -p download-only-037780        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| delete  | -p download-only-037780        | download-only-037780 | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC | 01 Oct 24 19:46 UTC |
	| start   | -o=json --download-only        | download-only-543885 | jenkins | v1.34.0 | 01 Oct 24 19:46 UTC |                     |
	|         | -p download-only-543885        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/01 19:46:37
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1001 19:46:37.418986  741465 out.go:345] Setting OutFile to fd 1 ...
	I1001 19:46:37.419206  741465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:46:37.419232  741465 out.go:358] Setting ErrFile to fd 2...
	I1001 19:46:37.419252  741465 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 19:46:37.419531  741465 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 19:46:37.419971  741465 out.go:352] Setting JSON to true
	I1001 19:46:37.420931  741465 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":12545,"bootTime":1727799453,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 19:46:37.421031  741465 start.go:139] virtualization:  
	I1001 19:46:37.423704  741465 out.go:97] [download-only-543885] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 19:46:37.423870  741465 notify.go:220] Checking for updates...
	I1001 19:46:37.426125  741465 out.go:169] MINIKUBE_LOCATION=19736
	I1001 19:46:37.428280  741465 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 19:46:37.430451  741465 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 19:46:37.432294  741465 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	I1001 19:46:37.434122  741465 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1001 19:46:37.437802  741465 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1001 19:46:37.438063  741465 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 19:46:37.462113  741465 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 19:46:37.462274  741465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 19:46:37.518452  741465 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 19:46:37.508689235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 19:46:37.518559  741465 docker.go:318] overlay module found
	I1001 19:46:37.520579  741465 out.go:97] Using the docker driver based on user configuration
	I1001 19:46:37.520616  741465 start.go:297] selected driver: docker
	I1001 19:46:37.520623  741465 start.go:901] validating driver "docker" against <nil>
	I1001 19:46:37.520730  741465 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 19:46:37.571944  741465 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-01 19:46:37.562974455 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 19:46:37.572185  741465 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1001 19:46:37.572579  741465 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1001 19:46:37.572742  741465 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1001 19:46:37.575233  741465 out.go:169] Using Docker driver with root privileges
	I1001 19:46:37.577480  741465 cni.go:84] Creating CNI manager for ""
	I1001 19:46:37.577545  741465 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1001 19:46:37.577561  741465 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1001 19:46:37.577645  741465 start.go:340] cluster config:
	{Name:download-only-543885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-543885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 19:46:37.580060  741465 out.go:97] Starting "download-only-543885" primary control-plane node in "download-only-543885" cluster
	I1001 19:46:37.580080  741465 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1001 19:46:37.582062  741465 out.go:97] Pulling base image v0.0.45-1727731891-master ...
	I1001 19:46:37.582093  741465 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 19:46:37.582137  741465 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local docker daemon
	I1001 19:46:37.596783  741465 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 to local cache
	I1001 19:46:37.596914  741465 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory
	I1001 19:46:37.596940  741465 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 in local cache directory, skipping pull
	I1001 19:46:37.596946  741465 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 exists in cache, skipping pull
	I1001 19:46:37.596957  741465 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 as a tarball
	I1001 19:46:37.645991  741465 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1001 19:46:37.646017  741465 cache.go:56] Caching tarball of preloaded images
	I1001 19:46:37.646197  741465 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1001 19:46:37.648117  741465 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1001 19:46:37.648161  741465 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1001 19:46:37.766024  741465 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19736-735883/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-543885 host does not exist
	  To start a cluster, run: "minikube start -p download-only-543885"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)
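
For reference, the preload tarball fetched above can be re-checked by hand against the md5 reported in the download.go:107 line; a minimal sketch, assuming the v1.31.1 artifact name and checksum exactly as they appear in the log:

    # Re-download the preload tarball and verify it against the md5 from the log above.
    PRELOAD=preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
    curl -fLO "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/${PRELOAD}"
    echo "b0cdb5ac9449e6e1388c2153988f76f5  ${PRELOAD}" | md5sum -c -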

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-543885
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.53s)

                                                
                                                
=== RUN   TestBinaryMirror
I1001 19:46:47.969216  741264 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-849941 --alsologtostderr --binary-mirror http://127.0.0.1:43939 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-849941" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-849941
--- PASS: TestBinaryMirror (0.53s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:932: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-164127
addons_test.go:932: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-164127: exit status 85 (75.64701ms)

                                                
                                                
-- stdout --
	* Profile "addons-164127" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-164127"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:943: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-164127
addons_test.go:943: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-164127: exit status 85 (73.559698ms)

                                                
                                                
-- stdout --
	* Profile "addons-164127" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-164127"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (211.78s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-164127 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-164127 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m31.782065001s)
--- PASS: TestAddons/Setup (211.78s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-164127 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-164127 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.1s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.209319ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-v9l9x" [74b85b16-903b-4a08-bdb8-9b3c7422ae07] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004301073s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kzs7s" [be6b07d5-273a-4cd0-897c-5e38dc0e0531] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003148511s
addons_test.go:331: (dbg) Run:  kubectl --context addons-164127 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-164127 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-164127 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.011404844s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 ip
2024/10/01 19:54:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.10s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (18.96s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-164127 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-164127 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-164127 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [49ef91e8-bb9e-4415-b338-9ce8db49e3a2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [49ef91e8-bb9e-4415-b338-9ce8db49e3a2] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003344249s
I1001 19:55:18.044113  741264 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-164127 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-164127 addons disable ingress-dns --alsologtostderr -v=1: (1.644528595s)
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable ingress --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-164127 addons disable ingress --alsologtostderr -v=1: (7.742710869s)
--- PASS: TestAddons/parallel/Ingress (18.96s)
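
The two probes this Ingress test runs can be replayed from the commands logged above; a minimal sketch, assuming the addons-164127 profile is still running and the testdata manifests are still applied:

    # Hit the nginx backend through ingress-nginx using the test's Host header,
    # then resolve the ingress-dns record against the node IP from `minikube ip`.
    out/minikube-linux-arm64 -p addons-164127 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-164127 ip)"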

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.94s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-m4sff" [ffc0e95d-4387-42a2-8e7b-4224b3651fd3] Running
addons_test.go:756: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004728891s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-164127 addons disable inspektor-gadget --alsologtostderr -v=1: (5.934355472s)
--- PASS: TestAddons/parallel/InspektorGadget (10.94s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.83s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.225258ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-d5z7g" [2cb020f5-d6d4-43bf-b189-8c27fde55bde] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003938757s
addons_test.go:402: (dbg) Run:  kubectl --context addons-164127 top pods -n kube-system
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

                                                
                                    
x
+
TestAddons/parallel/CSI (40.34s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1001 19:54:22.223693  741264 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1001 19:54:22.228592  741264 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1001 19:54:22.228830  741264 kapi.go:107] duration metric: took 5.145921ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.320564ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-164127 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-164127 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c6a53e5d-d5ff-4c18-94eb-9cfa442ac037] Pending
helpers_test.go:344: "task-pv-pod" [c6a53e5d-d5ff-4c18-94eb-9cfa442ac037] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c6a53e5d-d5ff-4c18-94eb-9cfa442ac037] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003340717s
addons_test.go:511: (dbg) Run:  kubectl --context addons-164127 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-164127 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-164127 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-164127 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-164127 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-164127 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-164127 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [028c8d13-3c70-4a77-9ad7-1b7143688475] Pending
helpers_test.go:344: "task-pv-pod-restore" [028c8d13-3c70-4a77-9ad7-1b7143688475] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [028c8d13-3c70-4a77-9ad7-1b7143688475] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003907831s
addons_test.go:553: (dbg) Run:  kubectl --context addons-164127 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-164127 delete pod task-pv-pod-restore: (1.421975723s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-164127 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-164127 delete volumesnapshot new-snapshot-demo
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-164127 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.878247008s)
--- PASS: TestAddons/parallel/CSI (40.34s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:741: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-164127 --alsologtostderr -v=1
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-lljgv" [13a51c48-e5eb-49f7-ab60-a85fd01a136d] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-lljgv" [13a51c48-e5eb-49f7-ab60-a85fd01a136d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-lljgv" [13a51c48-e5eb-49f7-ab60-a85fd01a136d] Running
addons_test.go:746: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.004371579s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable headlamp --alsologtostderr -v=1
addons_test.go:977: (dbg) Done: out/minikube-linux-arm64 -p addons-164127 addons disable headlamp --alsologtostderr -v=1: (5.721901969s)
--- PASS: TestAddons/parallel/Headlamp (15.71s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-qxsnh" [11ced5c3-52a0-4f91-b146-0e145262de2f] Running
addons_test.go:773: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00351841s
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.56s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.54s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:881: (dbg) Run:  kubectl --context addons-164127 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:887: (dbg) Run:  kubectl --context addons-164127 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:891: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-164127 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9fc22590-4202-47ac-953f-47f4673741ce] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9fc22590-4202-47ac-953f-47f4673741ce] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9fc22590-4202-47ac-953f-47f4673741ce] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003091843s
addons_test.go:899: (dbg) Run:  kubectl --context addons-164127 get pvc test-pvc -o=json
addons_test.go:908: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 ssh "cat /opt/local-path-provisioner/pvc-67c3ae0d-1766-438b-90f5-22fe98dbb036_default_test-pvc/file1"
addons_test.go:920: (dbg) Run:  kubectl --context addons-164127 delete pod test-local-path
addons_test.go:924: (dbg) Run:  kubectl --context addons-164127 delete pvc test-pvc
addons_test.go:977: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.54s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-79kbd" [324167bb-5d6b-4381-b1b2-61d389cb657d] Running
addons_test.go:956: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004448535s
addons_test.go:959: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-164127
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.54s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-5784p" [cb2eaf99-582f-4053-8782-76c7f51be5b8] Running
addons_test.go:967: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00399754s
addons_test.go:971: (dbg) Run:  out/minikube-linux-arm64 -p addons-164127 addons disable yakd --alsologtostderr -v=1
addons_test.go:971: (dbg) Done: out/minikube-linux-arm64 -p addons-164127 addons disable yakd --alsologtostderr -v=1: (5.901475167s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-164127
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-164127: (11.960853959s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-164127
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-164127
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-164127
--- PASS: TestAddons/StoppedEnableDisable (12.22s)

                                                
                                    
x
+
TestCertOptions (36.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-232828 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-232828 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.419892901s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-232828 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-232828 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-232828 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-232828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-232828
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-232828: (1.973600222s)
--- PASS: TestCertOptions (36.02s)
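
The SAN assertions behind cert_options_test.go:60 can be spot-checked with the same openssl invocation the test runs; a rough sketch, assuming the cert-options-232828 profile from the run above (the grep pattern is illustrative, not the test's own check):

    # Dump the apiserver cert and confirm the extra SANs passed via --apiserver-ips / --apiserver-names.
    out/minikube-linux-arm64 -p cert-options-232828 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -E 'IP Address:192\.168\.15\.15|DNS:www\.google\.com'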

                                                
                                    
x
+
TestCertExpiration (228.75s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-253225 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-253225 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.996384147s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-253225 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-253225 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.54788528s)
helpers_test.go:175: Cleaning up "cert-expiration-253225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-253225
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-253225: (2.205885524s)
--- PASS: TestCertExpiration (228.75s)

                                                
                                    
x
+
TestForceSystemdFlag (32.6s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-791520 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1001 20:30:20.382616  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-791520 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.354843842s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-791520 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-791520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-791520
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-791520: (1.961097436s)
--- PASS: TestForceSystemdFlag (32.60s)
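
What docker_test.go:121 reads back after --force-systemd is containerd's config inside the node; a hedged sketch, assuming the force-systemd-flag-791520 profile above (the SystemdCgroup grep is my illustration, not copied from the test):

    # Print containerd's config from inside the minikube node and look for the systemd cgroup driver setting.
    out/minikube-linux-arm64 -p force-systemd-flag-791520 ssh "cat /etc/containerd/config.toml" | grep -i SystemdCgroup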

                                                
                                    
x
+
TestForceSystemdEnv (42.64s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-822393 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-822393 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.813410214s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-822393 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-822393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-822393
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-822393: (2.390684791s)
--- PASS: TestForceSystemdEnv (42.64s)

                                                
                                    
x
+
TestDockerEnvContainerd (47.82s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-336737 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-336737 --driver=docker  --container-runtime=containerd: (32.505081561s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-336737"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-qSwudeAGjV4m/agent.763799" SSH_AGENT_PID="763800" DOCKER_HOST=ssh://docker@127.0.0.1:33538 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-qSwudeAGjV4m/agent.763799" SSH_AGENT_PID="763800" DOCKER_HOST=ssh://docker@127.0.0.1:33538 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-qSwudeAGjV4m/agent.763799" SSH_AGENT_PID="763800" DOCKER_HOST=ssh://docker@127.0.0.1:33538 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.134992864s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-qSwudeAGjV4m/agent.763799" SSH_AGENT_PID="763800" DOCKER_HOST=ssh://docker@127.0.0.1:33538 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-336737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-336737
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-336737: (1.894679627s)
--- PASS: TestDockerEnvContainerd (47.82s)
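
The docker-env flow exercised above can be replayed interactively; a minimal sketch, assuming a containerd-backed profile named dockerenv-336737 as in the run:

    # Point the local docker CLI at the dockerd inside the minikube node over SSH,
    # then run the same sanity commands the test runs.
    out/minikube-linux-arm64 start -p dockerenv-336737 --driver=docker --container-runtime=containerd
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-336737)"
    docker version
    docker image ls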

                                                
                                    
x
+
TestErrorSpam/setup (29.66s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-643999 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-643999 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-643999 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-643999 --driver=docker  --container-runtime=containerd: (29.662646871s)
--- PASS: TestErrorSpam/setup (29.66s)

                                                
                                    
x
+
TestErrorSpam/start (0.67s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 start --dry-run
--- PASS: TestErrorSpam/start (0.67s)

                                                
                                    
x
+
TestErrorSpam/status (1.03s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 status
--- PASS: TestErrorSpam/status (1.03s)

                                                
                                    
x
+
TestErrorSpam/pause (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.79s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 unpause
--- PASS: TestErrorSpam/unpause (1.79s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 stop: (1.280454434s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-643999 --log_dir /tmp/nospam-643999 stop
--- PASS: TestErrorSpam/stop (1.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19736-735883/.minikube/files/etc/test/nested/copy/741264/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (87.99s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-988381 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-988381 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m27.993336531s)
--- PASS: TestFunctional/serial/StartWithProxy (87.99s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.67s)

=== RUN   TestFunctional/serial/SoftStart
I1001 19:58:42.561472  741264 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-988381 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-988381 --alsologtostderr -v=8: (5.667647493s)
functional_test.go:663: soft start took 5.668271906s for "functional-988381" cluster.
I1001 19:58:48.229474  741264 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (5.67s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-988381 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.01s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 cache add registry.k8s.io/pause:3.1: (1.428646373s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 cache add registry.k8s.io/pause:3.3: (1.38700505s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 cache add registry.k8s.io/pause:latest: (1.191009369s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.01s)

TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-988381 /tmp/TestFunctionalserialCacheCmdcacheadd_local3987664731/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cache add minikube-local-cache-test:functional-988381
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cache delete minikube-local-cache-test:functional-988381
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-988381
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (276.617435ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 cache reload: (1.056265623s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.93s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 kubectl -- --context functional-988381 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-988381 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (46.12s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-988381 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-988381 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.124275123s)
functional_test.go:761: restart took 46.124379737s for "functional-988381" cluster.
I1001 19:59:42.436388  741264 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (46.12s)

TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-988381 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

TestFunctional/serial/LogsCmd (1.65s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 logs: (1.651508394s)
--- PASS: TestFunctional/serial/LogsCmd (1.65s)

TestFunctional/serial/LogsFileCmd (1.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 logs --file /tmp/TestFunctionalserialLogsFileCmd4245033753/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 logs --file /tmp/TestFunctionalserialLogsFileCmd4245033753/001/logs.txt: (1.788111072s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

TestFunctional/serial/InvalidService (4.19s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-988381 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-988381
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-988381: exit status 115 (427.919052ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32076 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-988381 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.19s)

TestFunctional/parallel/ConfigCmd (0.43s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 config get cpus: exit status 14 (80.623028ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 config get cpus: exit status 14 (65.622754ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

TestFunctional/parallel/DashboardCmd (7.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-988381 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-988381 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 778755: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.67s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-988381 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-988381 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (255.055845ms)

                                                
                                                
-- stdout --
	* [functional-988381] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:00:23.135927  778305 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:00:23.136175  778305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:00:23.136202  778305 out.go:358] Setting ErrFile to fd 2...
	I1001 20:00:23.136223  778305 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:00:23.136532  778305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:00:23.136931  778305 out.go:352] Setting JSON to false
	I1001 20:00:23.137940  778305 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13371,"bootTime":1727799453,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 20:00:23.138036  778305 start.go:139] virtualization:  
	I1001 20:00:23.143724  778305 out.go:177] * [functional-988381] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 20:00:23.145706  778305 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:00:23.145770  778305 notify.go:220] Checking for updates...
	I1001 20:00:23.150138  778305 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:00:23.151943  778305 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 20:00:23.153766  778305 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	I1001 20:00:23.156298  778305 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 20:00:23.163131  778305 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:00:23.169945  778305 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 20:00:23.170647  778305 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:00:23.209788  778305 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 20:00:23.209907  778305 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:00:23.287821  778305 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 20:00:23.270973901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:00:23.287927  778305 docker.go:318] overlay module found
	I1001 20:00:23.290385  778305 out.go:177] * Using the docker driver based on existing profile
	I1001 20:00:23.292110  778305 start.go:297] selected driver: docker
	I1001 20:00:23.292124  778305 start.go:901] validating driver "docker" against &{Name:functional-988381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-988381 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:00:23.292231  778305 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:00:23.294734  778305 out.go:201] 
	W1001 20:00:23.298979  778305 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1001 20:00:23.300819  778305 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-988381 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)

TestFunctional/parallel/InternationalLanguage (0.25s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-988381 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
E1001 20:00:22.953102  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-988381 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (244.745964ms)

                                                
                                                
-- stdout --
	* [functional-988381] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:00:22.867920  778241 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:00:22.868085  778241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:00:22.868114  778241 out.go:358] Setting ErrFile to fd 2...
	I1001 20:00:22.868121  778241 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:00:22.872722  778241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:00:22.873226  778241 out.go:352] Setting JSON to false
	I1001 20:00:22.874246  778241 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13370,"bootTime":1727799453,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 20:00:22.874352  778241 start.go:139] virtualization:  
	I1001 20:00:22.877479  778241 out.go:177] * [functional-988381] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1001 20:00:22.879669  778241 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:00:22.879736  778241 notify.go:220] Checking for updates...
	I1001 20:00:22.885151  778241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:00:22.888055  778241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 20:00:22.890379  778241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	I1001 20:00:22.892305  778241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 20:00:22.894684  778241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:00:22.897118  778241 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 20:00:22.897741  778241 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:00:22.950558  778241 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 20:00:22.950693  778241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:00:23.035363  778241 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 20:00:23.016667792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:00:23.035474  778241 docker.go:318] overlay module found
	I1001 20:00:23.037661  778241 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1001 20:00:23.039456  778241 start.go:297] selected driver: docker
	I1001 20:00:23.039473  778241 start.go:901] validating driver "docker" against &{Name:functional-988381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727731891-master@sha256:d66dfd4a976cf0b4581cac255174cef4031588c4570fa4a795e0b3d42edc9122 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-988381 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1001 20:00:23.039587  778241 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:00:23.041839  778241 out.go:201] 
	W1001 20:00:23.043554  778241 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1001 20:00:23.045312  778241 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

TestFunctional/parallel/ServiceCmdConnect (12.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-988381 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-988381 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-nxr5x" [5c49ad2c-0ed4-4f25-bc42-972f03651c88] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-nxr5x" [5c49ad2c-0ed4-4f25-bc42-972f03651c88] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.003854785s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30988
functional_test.go:1675: http://192.168.49.2:30988: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-nxr5x

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30988
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.62s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (25.23s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2e303a51-7d5c-4721-b2ff-11560113c5c1] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004101073s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-988381 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-988381 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-988381 get pvc myclaim -o=json
I1001 19:59:57.649779  741264 retry.go:31] will retry after 1.593920513s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:f72b144d-d194-498d-9491-8dac34b9622a ResourceVersion:661 Generation:0 CreationTimestamp:2024-10-01 19:59:57 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0x40004fe5d0 VolumeMode:0x40004fe600 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-988381 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-988381 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [85f32d35-e182-4599-8f93-e06b1744e855] Pending
helpers_test.go:344: "sp-pod" [85f32d35-e182-4599-8f93-e06b1744e855] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [85f32d35-e182-4599-8f93-e06b1744e855] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003890792s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-988381 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-988381 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-988381 delete -f testdata/storage-provisioner/pod.yaml: (1.577873364s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-988381 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fac308a0-bbb3-4c21-af3a-13fd4849d4b3] Pending
helpers_test.go:344: "sp-pod" [fac308a0-bbb3-4c21-af3a-13fd4849d4b3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003821454s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-988381 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.23s)

TestFunctional/parallel/SSHCmd (0.67s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.12s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh -n functional-988381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cp functional-988381:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3851008905/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh -n functional-988381 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh -n functional-988381 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.12s)

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/741264/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo cat /etc/test/nested/copy/741264/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

TestFunctional/parallel/CertSync (2.06s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/741264.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo cat /etc/ssl/certs/741264.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/741264.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo cat /usr/share/ca-certificates/741264.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7412642.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo cat /etc/ssl/certs/7412642.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7412642.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo cat /usr/share/ca-certificates/7412642.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.06s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-988381 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo systemctl is-active docker"
2024/10/01 20:00:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 ssh "sudo systemctl is-active docker": exit status 1 (311.790533ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 ssh "sudo systemctl is-active crio": exit status 1 (278.870911ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
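Note: on this containerd cluster the non-zero exits above are the expected outcome, because `systemctl is-active` reports "inactive" with exit status 3 for the runtimes that are not selected. The following is a rough Go sketch of that check, assuming it runs on a systemd host; it is an illustration, not the test's implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive reports whether the given systemd unit is active.
// A non-zero exit (e.g. status 3 for "inactive") surfaces as err != nil.
func runtimeActive(unit string) (bool, string) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	return err == nil && state == "active", state
}

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		active, state := runtimeActive(unit)
		fmt.Printf("%s: active=%v (%s)\n", unit, active, state)
	}
}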

                                                
                                    
x
+
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
E1001 20:00:30.637398  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-988381 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-988381 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-988381 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-988381 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 775900: os: process already finished
helpers_test.go:502: unable to terminate pid 775710: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-988381 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-988381 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [427d7bad-bcad-4fb6-97a7-850922301a24] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [427d7bad-bcad-4fb6-97a7-850922301a24] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.007179824s
I1001 20:00:00.256940  741264 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)
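Note: the setup above applies testdata/testsvc.yaml and then polls the pod list until the run=nginx-svc pod is Running. A roughly equivalent wait can be expressed with `kubectl wait`, sketched below under the assumption that kubectl and the functional-988381 context are available.

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Block until the nginx-svc pod reports Ready, or fail after the same 4m budget.
	cmd := exec.Command("kubectl", "--context", "functional-988381",
		"wait", "--for=condition=Ready", "pod", "-l", "run=nginx-svc",
		"--timeout=4m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("pod never became Ready: %v", err)
	}
}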

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-988381 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.69.208 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-988381 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-988381 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-988381 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-z5nfb" [495b39b1-321c-4299-a216-918c0d6b82df] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-z5nfb" [495b39b1-321c-4299-a216-918c0d6b82df] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.005334291s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "339.654631ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "52.282499ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "320.237518ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "50.023029ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdany-port954909469/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727812818716295578" to /tmp/TestFunctionalparallelMountCmdany-port954909469/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727812818716295578" to /tmp/TestFunctionalparallelMountCmdany-port954909469/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727812818716295578" to /tmp/TestFunctionalparallelMountCmdany-port954909469/001/test-1727812818716295578
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (326.375674ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 20:00:19.042967  741264 retry.go:31] will retry after 436.922433ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  1 20:00 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  1 20:00 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  1 20:00 test-1727812818716295578
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh cat /mount-9p/test-1727812818716295578
E1001 20:00:20.383320  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:00:20.389717  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:00:20.401143  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:00:20.422486  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-988381 replace --force -f testdata/busybox-mount-test.yaml
E1001 20:00:20.463740  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:00:20.545722  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d45288aa-2d8e-420b-8e63-e2a04735dab0] Pending
helpers_test.go:344: "busybox-mount" [d45288aa-2d8e-420b-8e63-e2a04735dab0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1001 20:00:21.670855  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox-mount" [d45288aa-2d8e-420b-8e63-e2a04735dab0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d45288aa-2d8e-420b-8e63-e2a04735dab0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004545608s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-988381 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh stat /mount-9p/created-by-pod
E1001 20:00:25.515540  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdany-port954909469/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.29s)
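Note: the first `findmnt -T /mount-9p` probe above fails and is retried after ~437ms, because the 9p mount takes a moment to appear after the mount daemon starts. A small hedged sketch of that retry pattern follows; it assumes a `minikube` binary on PATH and the functional-988381 profile, with the mount path taken from the log.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for {
		// Ask the node whether /mount-9p is backed by a 9p filesystem.
		out, err := exec.Command("minikube", "-p", "functional-988381",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil && strings.Contains(string(out), "9p") {
			fmt.Print(string(out))
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mount never appeared: %v\n%s", err, out)
		}
		time.Sleep(500 * time.Millisecond)
	}
}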

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 service list -o json
functional_test.go:1494: Took "578.099695ms" to run "out/minikube-linux-arm64 -p functional-988381 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 service --namespace=default --https --url hello-node
E1001 20:00:20.707381  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:00:21.029045  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1522: found endpoint: https://192.168.49.2:32086
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32086
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdspecific-port3632688374/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.07252ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 20:00:26.384104  741264 retry.go:31] will retry after 478.271629ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdspecific-port3632688374/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 ssh "sudo umount -f /mount-9p": exit status 1 (307.207814ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-988381 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdspecific-port3632688374/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.04s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup752262623/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup752262623/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup752262623/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T" /mount1: exit status 1 (930.31478ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1001 20:00:28.976429  741264 retry.go:31] will retry after 530.348592ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-988381 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup752262623/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup752262623/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-988381 /tmp/TestFunctionalparallelMountCmdVerifyCleanup752262623/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.44s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 version -o=json --components: (1.256424195s)
--- PASS: TestFunctional/parallel/Version/components (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-988381 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-988381
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-988381
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-988381 image ls --format short --alsologtostderr:
I1001 20:00:38.096920  781175 out.go:345] Setting OutFile to fd 1 ...
I1001 20:00:38.097110  781175 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.097122  781175 out.go:358] Setting ErrFile to fd 2...
I1001 20:00:38.097128  781175 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.097409  781175 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
I1001 20:00:38.098066  781175 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.098233  781175 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.098743  781175 cli_runner.go:164] Run: docker container inspect functional-988381 --format={{.State.Status}}
I1001 20:00:38.127454  781175 ssh_runner.go:195] Run: systemctl --version
I1001 20:00:38.127513  781175 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988381
I1001 20:00:38.146976  781175 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/functional-988381/id_rsa Username:docker}
I1001 20:00:38.241783  781175 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-988381 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-988381  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-988381  | sha256:063fb2 | 992B   |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| docker.io/library/nginx                     | latest             | sha256:6e8672 | 67.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-988381 image ls --format table --alsologtostderr:
I1001 20:00:38.655433  781328 out.go:345] Setting OutFile to fd 1 ...
I1001 20:00:38.656847  781328 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.656868  781328 out.go:358] Setting ErrFile to fd 2...
I1001 20:00:38.656890  781328 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.657582  781328 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
I1001 20:00:38.658345  781328 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.658753  781328 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.659281  781328 cli_runner.go:164] Run: docker container inspect functional-988381 --format={{.State.Status}}
I1001 20:00:38.692809  781328 ssh_runner.go:195] Run: systemctl --version
I1001 20:00:38.692866  781328 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988381
I1001 20:00:38.723226  781328 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/functional-988381/id_rsa Username:docker}
I1001 20:00:38.821156  781328 ssh_runner.go:195] Run: sudo crictl images --output json
E1001 20:00:40.879287  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-988381 image ls --format json --alsologtostderr:
[{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:063fb2096e0d1b124f21010308641758145b8d7369d1aaaede991d6584124920","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-988381"],"size":"992"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535
646"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b952
9bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-988381"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["regist
ry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:b887aca7aed6134b0294
01507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":["docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb"],"repoTags":["docker.io/library/nginx:latest"],"size":"67693717"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-988381 image ls --format json --alsologtostderr:
I1001 20:00:38.388751  781245 out.go:345] Setting OutFile to fd 1 ...
I1001 20:00:38.392984  781245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.393036  781245 out.go:358] Setting ErrFile to fd 2...
I1001 20:00:38.393058  781245 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.393373  781245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
I1001 20:00:38.394127  781245 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.394300  781245 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.394829  781245 cli_runner.go:164] Run: docker container inspect functional-988381 --format={{.State.Status}}
I1001 20:00:38.422535  781245 ssh_runner.go:195] Run: systemctl --version
I1001 20:00:38.422589  781245 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988381
I1001 20:00:38.444840  781245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/functional-988381/id_rsa Username:docker}
I1001 20:00:38.541713  781245 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
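Note: the JSON above is a flat list of image records. For illustration, a minimal Go sketch of decoding it follows; the field set (id, repoDigests, repoTags, size) is inferred from the output captured above, and a `minikube` binary on PATH with the functional-988381 profile is assumed.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the captured `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-988381",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("unexpected output shape: %v", err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}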

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-988381 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-988381
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:063fb2096e0d1b124f21010308641758145b8d7369d1aaaede991d6584124920
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-988381
size: "992"
- id: sha256:6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests:
- docker.io/library/nginx@sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
repoTags:
- docker.io/library/nginx:latest
size: "67693717"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-988381 image ls --format yaml --alsologtostderr:
I1001 20:00:38.091901  781176 out.go:345] Setting OutFile to fd 1 ...
I1001 20:00:38.092117  781176 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.092139  781176 out.go:358] Setting ErrFile to fd 2...
I1001 20:00:38.092157  781176 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.092431  781176 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
I1001 20:00:38.093164  781176 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.093386  781176 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.093970  781176 cli_runner.go:164] Run: docker container inspect functional-988381 --format={{.State.Status}}
I1001 20:00:38.109584  781176 ssh_runner.go:195] Run: systemctl --version
I1001 20:00:38.109636  781176 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988381
I1001 20:00:38.125906  781176 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/functional-988381/id_rsa Username:docker}
I1001 20:00:38.216830  781176 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-988381 ssh pgrep buildkitd: exit status 1 (321.95143ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image build -t localhost/my-image:functional-988381 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 image build -t localhost/my-image:functional-988381 testdata/build --alsologtostderr: (3.154193243s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-988381 image build -t localhost/my-image:functional-988381 testdata/build --alsologtostderr:
I1001 20:00:38.654581  781329 out.go:345] Setting OutFile to fd 1 ...
I1001 20:00:38.655509  781329 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.655522  781329 out.go:358] Setting ErrFile to fd 2...
I1001 20:00:38.655528  781329 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1001 20:00:38.655794  781329 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
I1001 20:00:38.656489  781329 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.657737  781329 config.go:182] Loaded profile config "functional-988381": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1001 20:00:38.658254  781329 cli_runner.go:164] Run: docker container inspect functional-988381 --format={{.State.Status}}
I1001 20:00:38.690102  781329 ssh_runner.go:195] Run: systemctl --version
I1001 20:00:38.690156  781329 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-988381
I1001 20:00:38.709200  781329 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33548 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/functional-988381/id_rsa Username:docker}
I1001 20:00:38.816668  781329 build_images.go:161] Building image from path: /tmp/build.1860448261.tar
I1001 20:00:38.816734  781329 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1001 20:00:38.826446  781329 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1860448261.tar
I1001 20:00:38.830236  781329 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1860448261.tar: stat -c "%s %y" /var/lib/minikube/build/build.1860448261.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1860448261.tar': No such file or directory
I1001 20:00:38.830266  781329 ssh_runner.go:362] scp /tmp/build.1860448261.tar --> /var/lib/minikube/build/build.1860448261.tar (3072 bytes)
I1001 20:00:38.862616  781329 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1860448261
I1001 20:00:38.873431  781329 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1860448261 -xf /var/lib/minikube/build/build.1860448261.tar
I1001 20:00:38.883982  781329 containerd.go:394] Building image: /var/lib/minikube/build/build.1860448261
I1001 20:00:38.884065  781329 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1860448261 --local dockerfile=/var/lib/minikube/build/build.1860448261 --output type=image,name=localhost/my-image:functional-988381
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:34d78433e9c34debd3256b9a6e69cdc2f20bb6451054ec40ecdd5fcae44271f4 0.0s done
#8 exporting config sha256:39d2a58b3109234c0ba226215fe0f213b7b6cd1cad3376f3fe315799d01c6603 0.0s done
#8 naming to localhost/my-image:functional-988381 done
#8 DONE 0.1s
I1001 20:00:41.714257  781329 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1860448261 --local dockerfile=/var/lib/minikube/build/build.1860448261 --output type=image,name=localhost/my-image:functional-988381: (2.830159864s)
I1001 20:00:41.714323  781329 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1860448261
I1001 20:00:41.730509  781329 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1860448261.tar
I1001 20:00:41.741532  781329 build_images.go:217] Built localhost/my-image:functional-988381 from /tmp/build.1860448261.tar
I1001 20:00:41.741558  781329 build_images.go:133] succeeded building to: functional-988381
I1001 20:00:41.741563  781329 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.70s)
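Note: the buildkit trace above corresponds to a three-step Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) built through the dockerfile.v0 frontend. The Go sketch below reproduces an equivalent build via `minikube image build`; the Dockerfile content is reconstructed from the build steps in the log rather than the actual testdata/build directory, and a `minikube` binary on PATH is assumed.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "imagebuild")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	// Reconstructed from the [1/3]..[3/3] steps in the trace above; may differ from testdata/build.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	cmd := exec.Command("minikube", "-p", "functional-988381",
		"image", "build", "-t", "localhost/my-image:sketch", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("image build failed: %v", err)
	}
}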

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-988381
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.83s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)
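All three UpdateContextCmd subtests above run minikube update-context, which refreshes the profile's kubeconfig entry so it points at the node's current API server address and port. A minimal sketch of invoking it and printing where the context ends up, assuming functional-988381 is also the active kubectl context (the kubectl step is illustrative, not part of the test):

// update_context.go - illustrative sketch only.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Rewrite the kubeconfig entry for the profile to match the node's current IP and port.
	upd := exec.Command("out/minikube-linux-arm64", "-p", "functional-988381",
		"update-context", "--alsologtostderr", "-v=2")
	if out, err := upd.CombinedOutput(); err != nil {
		log.Fatalf("update-context failed: %v\n%s", err, out)
	}

	// Show which server the (assumed current) context now points at.
	server, err := exec.Command("kubectl", "config", "view", "--minify",
		"-o", "jsonpath={.clusters[0].cluster.server}").Output()
	if err != nil {
		log.Fatalf("kubectl config view failed: %v", err)
	}
	fmt.Println("context now points at:", string(server))
}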

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image load --daemon kicbase/echo-server:functional-988381 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-988381 image load --daemon kicbase/echo-server:functional-988381 --alsologtostderr: (1.184277746s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image load --daemon kicbase/echo-server:functional-988381 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-988381
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image load --daemon kicbase/echo-server:functional-988381 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image save kicbase/echo-server:functional-988381 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image rm kicbase/echo-server:functional-988381 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-988381
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-988381 image save --daemon kicbase/echo-server:functional-988381 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-988381
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
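Taken together, the ImageLoadDaemon, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests above walk an image through a full round trip between the host Docker daemon, a tarball and the cluster's runtime. A minimal sketch of that round trip, assuming the functional-988381 profile and the echo-server tag used above (the tarball path is a placeholder):

// image_roundtrip.go - illustrative sketch of the load/save/remove round trip above.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	mk := "out/minikube-linux-arm64"
	profile := "functional-988381"
	img := "kicbase/echo-server:" + profile
	tarball := "/tmp/echo-server-save.tar" // any writable path works

	run(mk, "-p", profile, "image", "load", "--daemon", img) // host docker -> cluster
	run(mk, "-p", profile, "image", "save", img, tarball)    // cluster -> tarball
	run(mk, "-p", profile, "image", "rm", img)               // drop it from the cluster
	run(mk, "-p", profile, "image", "load", tarball)         // tarball -> cluster
	run(mk, "-p", profile, "image", "save", "--daemon", img) // cluster -> host docker
	run("docker", "image", "inspect", img)                   // confirm it is back in the daemon
}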

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-988381
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-988381
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-988381
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (132.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-792355 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1001 20:01:01.361001  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:01:42.323113  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-792355 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m12.061011312s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (132.84s)
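The --ha flag above asks for a highly available topology, which minikube realizes as three control-plane nodes behind one shared endpoint (the later status output lists ha-792355 through ha-792355-m03 as Control Plane). A minimal sketch of starting such a cluster and counting control-plane entries in the status table, assuming the same binary and driver:

// ha_start.go - illustrative sketch only.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	mk := "out/minikube-linux-arm64"
	profile := "ha-792355"
	start := exec.Command(mk, "start", "-p", profile, "--wait=true", "--memory=2200",
		"--ha", "--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	status, err := exec.Command(mk, "-p", profile, "status").Output()
	if err != nil {
		log.Fatalf("status failed: %v", err)
	}
	// With --ha the status table should show multiple control-plane nodes.
	fmt.Println("control-plane entries:", strings.Count(string(status), "Control Plane"))
}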

                                                
                                    
TestMultiControlPlane/serial/DeployApp (37.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- rollout status deployment/busybox
E1001 20:03:04.245944  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-792355 -- rollout status deployment/busybox: (34.603852377s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-9g9kr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-qkfhb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-sn98s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-9g9kr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-qkfhb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-sn98s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-9g9kr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-qkfhb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-sn98s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (37.44s)
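The rollout above is followed by the same three lookups from every busybox pod: an external name, the in-cluster service shortname, and its fully qualified form. A minimal sketch of the same loop, assuming the ha-792355 context and the pod names from this run:

// dns_check.go - illustrative sketch only.
package main

import (
	"log"
	"os/exec"
)

func main() {
	pods := []string{ // pod names from the rollout above
		"busybox-7dff88458-9g9kr", "busybox-7dff88458-qkfhb", "busybox-7dff88458-sn98s",
	}
	names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range pods {
		for _, name := range names {
			cmd := exec.Command("kubectl", "--context", "ha-792355", "exec", pod, "--", "nslookup", name)
			if out, err := cmd.CombinedOutput(); err != nil {
				log.Fatalf("%s could not resolve %s: %v\n%s", pod, name, err, out)
			}
		}
	}
	log.Println("external, cluster-local and FQDN lookups succeeded from every pod")
}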

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-9g9kr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-9g9kr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-qkfhb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-qkfhb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-sn98s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-792355 -- exec busybox-7dff88458-sn98s -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.58s)
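The shell pipeline logged at ha_test.go:207 recovers the host gateway address from inside a pod: nslookup host.minikube.internal, awk 'NR==5' to keep the answer line, and cut -d' ' -f3 to keep the address field, which ha_test.go:218 then pings. A minimal sketch of the same two steps, assuming the ha-792355 context and one of the pods above (plain kubectl is used here instead of the minikube kubectl wrapper):

// ping_host.go - illustrative sketch only.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	pod := "busybox-7dff88458-9g9kr" // one of the deployment's pods, as seen above
	// Resolve host.minikube.internal from inside the pod and keep the address field.
	resolve := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
	out, err := exec.Command("kubectl", "--context", "ha-792355",
		"exec", pod, "--", "sh", "-c", resolve).Output()
	if err != nil {
		log.Fatalf("resolve failed: %v", err)
	}
	hostIP := strings.TrimSpace(string(out))
	fmt.Println("host.minikube.internal =", hostIP)
	// One ICMP echo back to the host, as ha_test.go:218 does.
	ping := exec.Command("kubectl", "--context", "ha-792355",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP)
	if pout, err := ping.CombinedOutput(); err != nil {
		log.Fatalf("ping failed: %v\n%s", err, pout)
	}
	fmt.Println("host is reachable from the pod")
}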

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (20.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-792355 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-792355 -v=7 --alsologtostderr: (19.466978281s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (20.42s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-792355 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (18.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp testdata/cp-test.txt ha-792355:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile599467240/001/cp-test_ha-792355.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355:/home/docker/cp-test.txt ha-792355-m02:/home/docker/cp-test_ha-792355_ha-792355-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m02 "sudo cat /home/docker/cp-test_ha-792355_ha-792355-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355:/home/docker/cp-test.txt ha-792355-m03:/home/docker/cp-test_ha-792355_ha-792355-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m03 "sudo cat /home/docker/cp-test_ha-792355_ha-792355-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355:/home/docker/cp-test.txt ha-792355-m04:/home/docker/cp-test_ha-792355_ha-792355-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m04 "sudo cat /home/docker/cp-test_ha-792355_ha-792355-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp testdata/cp-test.txt ha-792355-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile599467240/001/cp-test_ha-792355-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m02:/home/docker/cp-test.txt ha-792355:/home/docker/cp-test_ha-792355-m02_ha-792355.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355 "sudo cat /home/docker/cp-test_ha-792355-m02_ha-792355.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m02:/home/docker/cp-test.txt ha-792355-m03:/home/docker/cp-test_ha-792355-m02_ha-792355-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m03 "sudo cat /home/docker/cp-test_ha-792355-m02_ha-792355-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m02:/home/docker/cp-test.txt ha-792355-m04:/home/docker/cp-test_ha-792355-m02_ha-792355-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m04 "sudo cat /home/docker/cp-test_ha-792355-m02_ha-792355-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp testdata/cp-test.txt ha-792355-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile599467240/001/cp-test_ha-792355-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m03:/home/docker/cp-test.txt ha-792355:/home/docker/cp-test_ha-792355-m03_ha-792355.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355 "sudo cat /home/docker/cp-test_ha-792355-m03_ha-792355.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m03:/home/docker/cp-test.txt ha-792355-m02:/home/docker/cp-test_ha-792355-m03_ha-792355-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m02 "sudo cat /home/docker/cp-test_ha-792355-m03_ha-792355-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m03:/home/docker/cp-test.txt ha-792355-m04:/home/docker/cp-test_ha-792355-m03_ha-792355-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m04 "sudo cat /home/docker/cp-test_ha-792355-m03_ha-792355-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp testdata/cp-test.txt ha-792355-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile599467240/001/cp-test_ha-792355-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m04:/home/docker/cp-test.txt ha-792355:/home/docker/cp-test_ha-792355-m04_ha-792355.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355 "sudo cat /home/docker/cp-test_ha-792355-m04_ha-792355.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m04:/home/docker/cp-test.txt ha-792355-m02:/home/docker/cp-test_ha-792355-m04_ha-792355-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m02 "sudo cat /home/docker/cp-test_ha-792355-m04_ha-792355-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 cp ha-792355-m04:/home/docker/cp-test.txt ha-792355-m03:/home/docker/cp-test_ha-792355-m04_ha-792355-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 ssh -n ha-792355-m03 "sudo cat /home/docker/cp-test_ha-792355-m04_ha-792355-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.26s)
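Every cp above is checked by ssh-ing into the destination node and cat-ing the file back. A minimal sketch of one of those hops (host to primary node, then primary to m02), assuming the ha-792355 profile and the testdata/cp-test.txt fixture referenced above:

// cp_check.go - illustrative sketch only.
package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func run(args ...string) []byte {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	if err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
	return out
}

func main() {
	want, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		log.Fatal(err)
	}
	// Host -> primary node, then primary -> m02, reading the file back over ssh at the end.
	run("-p", "ha-792355", "cp", "testdata/cp-test.txt", "ha-792355:/home/docker/cp-test.txt")
	run("-p", "ha-792355", "cp", "ha-792355:/home/docker/cp-test.txt",
		"ha-792355-m02:/home/docker/cp-test_ha-792355_ha-792355-m02.txt")
	got := run("-p", "ha-792355", "ssh", "-n", "ha-792355-m02",
		"sudo cat /home/docker/cp-test_ha-792355_ha-792355-m02.txt")
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("round-tripped file does not match testdata/cp-test.txt")
	}
}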

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-792355 node stop m02 -v=7 --alsologtostderr: (12.024333877s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr: exit status 7 (740.912315ms)

                                                
                                                
-- stdout --
	ha-792355
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-792355-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-792355-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-792355-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:04:28.341662  797600 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:04:28.341773  797600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:04:28.341782  797600 out.go:358] Setting ErrFile to fd 2...
	I1001 20:04:28.341788  797600 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:04:28.342017  797600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:04:28.342192  797600 out.go:352] Setting JSON to false
	I1001 20:04:28.342233  797600 mustload.go:65] Loading cluster: ha-792355
	I1001 20:04:28.342622  797600 notify.go:220] Checking for updates...
	I1001 20:04:28.343657  797600 config.go:182] Loaded profile config "ha-792355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 20:04:28.343714  797600 status.go:174] checking status of ha-792355 ...
	I1001 20:04:28.344357  797600 cli_runner.go:164] Run: docker container inspect ha-792355 --format={{.State.Status}}
	I1001 20:04:28.367128  797600 status.go:371] ha-792355 host status = "Running" (err=<nil>)
	I1001 20:04:28.367191  797600 host.go:66] Checking if "ha-792355" exists ...
	I1001 20:04:28.367507  797600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-792355
	I1001 20:04:28.395967  797600 host.go:66] Checking if "ha-792355" exists ...
	I1001 20:04:28.396277  797600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 20:04:28.396326  797600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-792355
	I1001 20:04:28.418254  797600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33553 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/ha-792355/id_rsa Username:docker}
	I1001 20:04:28.517978  797600 ssh_runner.go:195] Run: systemctl --version
	I1001 20:04:28.522066  797600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:04:28.533639  797600 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:04:28.598005  797600 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-01 20:04:28.581936881 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:04:28.598578  797600 kubeconfig.go:125] found "ha-792355" server: "https://192.168.49.254:8443"
	I1001 20:04:28.598630  797600 api_server.go:166] Checking apiserver status ...
	I1001 20:04:28.598691  797600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:04:28.610701  797600 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup
	I1001 20:04:28.622839  797600 api_server.go:182] apiserver freezer: "3:freezer:/docker/45ff45e3174963165dc8ac128136640a47fc496daa802dacb7758225a4979f83/kubepods/burstable/pod29b461bf505f6ecad21ad0884c5578ac/c4b28574da9170c2fb1f2db99f28bdfdaac7840552c34bb606022cade5465a95"
	I1001 20:04:28.622913  797600 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/45ff45e3174963165dc8ac128136640a47fc496daa802dacb7758225a4979f83/kubepods/burstable/pod29b461bf505f6ecad21ad0884c5578ac/c4b28574da9170c2fb1f2db99f28bdfdaac7840552c34bb606022cade5465a95/freezer.state
	I1001 20:04:28.633001  797600 api_server.go:204] freezer state: "THAWED"
	I1001 20:04:28.633030  797600 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1001 20:04:28.640807  797600 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1001 20:04:28.640837  797600 status.go:463] ha-792355 apiserver status = Running (err=<nil>)
	I1001 20:04:28.640847  797600 status.go:176] ha-792355 status: &{Name:ha-792355 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 20:04:28.640863  797600 status.go:174] checking status of ha-792355-m02 ...
	I1001 20:04:28.641158  797600 cli_runner.go:164] Run: docker container inspect ha-792355-m02 --format={{.State.Status}}
	I1001 20:04:28.658304  797600 status.go:371] ha-792355-m02 host status = "Stopped" (err=<nil>)
	I1001 20:04:28.658349  797600 status.go:384] host is not running, skipping remaining checks
	I1001 20:04:28.658357  797600 status.go:176] ha-792355-m02 status: &{Name:ha-792355-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 20:04:28.658390  797600 status.go:174] checking status of ha-792355-m03 ...
	I1001 20:04:28.658689  797600 cli_runner.go:164] Run: docker container inspect ha-792355-m03 --format={{.State.Status}}
	I1001 20:04:28.683728  797600 status.go:371] ha-792355-m03 host status = "Running" (err=<nil>)
	I1001 20:04:28.683753  797600 host.go:66] Checking if "ha-792355-m03" exists ...
	I1001 20:04:28.684047  797600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-792355-m03
	I1001 20:04:28.702110  797600 host.go:66] Checking if "ha-792355-m03" exists ...
	I1001 20:04:28.702516  797600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 20:04:28.702564  797600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-792355-m03
	I1001 20:04:28.720648  797600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33563 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/ha-792355-m03/id_rsa Username:docker}
	I1001 20:04:28.813569  797600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:04:28.825883  797600 kubeconfig.go:125] found "ha-792355" server: "https://192.168.49.254:8443"
	I1001 20:04:28.825912  797600 api_server.go:166] Checking apiserver status ...
	I1001 20:04:28.825954  797600 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:04:28.836364  797600 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1326/cgroup
	I1001 20:04:28.846616  797600 api_server.go:182] apiserver freezer: "3:freezer:/docker/bba7b9a9ac2adb2a3dc49bf52e753c12d19bc624409ced35194587f316560b4f/kubepods/burstable/pod16aba498cf83566fb2ea22e62bdfa2f0/95ad455d0984e7297201b27723f88a491d189899ca3334a0a92af52859ffdbc9"
	I1001 20:04:28.846733  797600 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bba7b9a9ac2adb2a3dc49bf52e753c12d19bc624409ced35194587f316560b4f/kubepods/burstable/pod16aba498cf83566fb2ea22e62bdfa2f0/95ad455d0984e7297201b27723f88a491d189899ca3334a0a92af52859ffdbc9/freezer.state
	I1001 20:04:28.855877  797600 api_server.go:204] freezer state: "THAWED"
	I1001 20:04:28.855916  797600 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1001 20:04:28.863674  797600 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1001 20:04:28.863704  797600 status.go:463] ha-792355-m03 apiserver status = Running (err=<nil>)
	I1001 20:04:28.863713  797600 status.go:176] ha-792355-m03 status: &{Name:ha-792355-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 20:04:28.863730  797600 status.go:174] checking status of ha-792355-m04 ...
	I1001 20:04:28.864042  797600 cli_runner.go:164] Run: docker container inspect ha-792355-m04 --format={{.State.Status}}
	I1001 20:04:28.881281  797600 status.go:371] ha-792355-m04 host status = "Running" (err=<nil>)
	I1001 20:04:28.881307  797600 host.go:66] Checking if "ha-792355-m04" exists ...
	I1001 20:04:28.881597  797600 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-792355-m04
	I1001 20:04:28.900266  797600 host.go:66] Checking if "ha-792355-m04" exists ...
	I1001 20:04:28.900616  797600 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 20:04:28.900656  797600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-792355-m04
	I1001 20:04:28.916832  797600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33568 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/ha-792355-m04/id_rsa Username:docker}
	I1001 20:04:29.009940  797600 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:04:29.021383  797600 status.go:176] ha-792355-m04 status: &{Name:ha-792355-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.77s)
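The stderr above documents how status verifies each control plane: docker container inspect for host state, systemctl is-active for the kubelet, pgrep plus the freezer cgroup to confirm the kube-apiserver container is THAWED, and finally an HTTPS probe of /healthz on the shared endpoint. A rough standalone sketch of that last probe, assuming 192.168.49.254:8443 is reachable from where it runs (the pgrep and freezer.state steps in the log run inside the node over ssh); certificate verification is skipped here purely to keep the sketch self-contained:

// healthz_probe.go - illustrative sketch of the final health check logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serves a cluster-internal certificate; verification is skipped
		// here only for brevity.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		log.Fatalf("healthz probe failed: %v", err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s: %s\n", resp.Status, body) // the run above saw a 200 with body "ok"
}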

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.02s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-792355 node start m02 -v=7 --alsologtostderr: (16.97072361s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.02s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.98s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (149.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-792355 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-792355 -v=7 --alsologtostderr
E1001 20:04:51.838751  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:51.845011  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:51.857071  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:51.878481  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:51.919869  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:52.001289  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:52.162650  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:52.484260  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:53.125705  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:54.407221  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:04:56.968583  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:05:02.090026  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:05:12.331547  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:05:20.383075  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-792355 -v=7 --alsologtostderr: (37.115483407s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-792355 --wait=true -v=7 --alsologtostderr
E1001 20:05:32.813393  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:05:48.087450  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:06:13.775336  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-792355 --wait=true -v=7 --alsologtostderr: (1m52.103936376s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-792355
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (149.36s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-792355 node delete m03 -v=7 --alsologtostderr: (8.677703785s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.59s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 stop -v=7 --alsologtostderr
E1001 20:07:35.700044  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-792355 stop -v=7 --alsologtostderr: (35.881890358s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr: exit status 7 (109.678839ms)

                                                
                                                
-- stdout --
	ha-792355
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-792355-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-792355-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:08:04.380073  811991 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:08:04.380217  811991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:08:04.380228  811991 out.go:358] Setting ErrFile to fd 2...
	I1001 20:08:04.380234  811991 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:08:04.380531  811991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:08:04.380761  811991 out.go:352] Setting JSON to false
	I1001 20:08:04.380797  811991 mustload.go:65] Loading cluster: ha-792355
	I1001 20:08:04.380892  811991 notify.go:220] Checking for updates...
	I1001 20:08:04.381232  811991 config.go:182] Loaded profile config "ha-792355": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 20:08:04.381246  811991 status.go:174] checking status of ha-792355 ...
	I1001 20:08:04.381803  811991 cli_runner.go:164] Run: docker container inspect ha-792355 --format={{.State.Status}}
	I1001 20:08:04.400241  811991 status.go:371] ha-792355 host status = "Stopped" (err=<nil>)
	I1001 20:08:04.400265  811991 status.go:384] host is not running, skipping remaining checks
	I1001 20:08:04.400272  811991 status.go:176] ha-792355 status: &{Name:ha-792355 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 20:08:04.400295  811991 status.go:174] checking status of ha-792355-m02 ...
	I1001 20:08:04.400669  811991 cli_runner.go:164] Run: docker container inspect ha-792355-m02 --format={{.State.Status}}
	I1001 20:08:04.424539  811991 status.go:371] ha-792355-m02 host status = "Stopped" (err=<nil>)
	I1001 20:08:04.424560  811991 status.go:384] host is not running, skipping remaining checks
	I1001 20:08:04.424567  811991 status.go:176] ha-792355-m02 status: &{Name:ha-792355-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 20:08:04.424586  811991 status.go:174] checking status of ha-792355-m04 ...
	I1001 20:08:04.424892  811991 cli_runner.go:164] Run: docker container inspect ha-792355-m04 --format={{.State.Status}}
	I1001 20:08:04.444367  811991 status.go:371] ha-792355-m04 host status = "Stopped" (err=<nil>)
	I1001 20:08:04.444395  811991 status.go:384] host is not running, skipping remaining checks
	I1001 20:08:04.444403  811991 status.go:176] ha-792355-m04 status: &{Name:ha-792355-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.99s)
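Note the exit status 7: minikube status reports a stopped or degraded profile through its exit code as well as the per-node table, which is why the test tolerates a non-zero exit here. A minimal sketch of branching on that code, assuming the same binary and profile:

// status_exit.go - illustrative sketch only.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-792355", "status")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("cluster is up:\n" + string(out))
	case errors.As(err, &exitErr):
		// A stopped profile still prints the per-node table on stdout but exits non-zero
		// (exit status 7 in the run logged above).
		fmt.Printf("cluster not fully running (exit code %d):\n%s", exitErr.ExitCode(), out)
	default:
		log.Fatalf("could not run minikube status: %v", err)
	}
}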

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (79.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-792355 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-792355 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.63223646s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.62s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (43.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-792355 --control-plane -v=7 --alsologtostderr
E1001 20:09:51.837801  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-792355 --control-plane -v=7 --alsologtostderr: (42.216832574s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-792355 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.18s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.97s)

                                                
                                    
TestJSONOutput/start/Command (44.93s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-723117 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1001 20:10:19.541934  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:10:20.383096  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-723117 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (44.925119996s)
--- PASS: TestJSONOutput/start/Command (44.93s)
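With --output=json the start command emits one JSON event per line, which the Audit and parallel subtests below assert over. A minimal sketch of consuming that stream, assuming the same invocation as above; the events are decoded generically here, and printing the top-level "type" field is an assumption about the event schema rather than something shown in this log:

// json_steps.go - illustrative sketch of reading minikube's --output=json stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "json-output-723117",
		"--output=json", "--user=testUser", "--memory=2200", "--wait=true",
		"--driver=docker", "--container-runtime=containerd")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		log.Fatal(err)
	}
	if err := cmd.Start(); err != nil {
		log.Fatal(err)
	}
	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			log.Printf("non-JSON line: %q", sc.Text())
			continue
		}
		// Each line is a self-contained event; the subtests below check that step
		// events carry distinct, increasing step counters.
		fmt.Println("event type:", ev["type"])
	}
	if err := cmd.Wait(); err != nil {
		log.Fatal(err)
	}
}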

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.72s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-723117 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.72s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.64s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-723117 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.64s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.78s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-723117 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-723117 --output=json --user=testUser: (5.78109192s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-859309 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-859309 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.065614ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"568c5261-86b8-4dfa-83d4-80b430d56fa3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-859309] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3f446141-98fb-48aa-8bba-300b897cfbc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19736"}}
	{"specversion":"1.0","id":"83c27b2e-b021-4153-946f-be972bbf489e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3a805a48-7d63-400e-8f48-c292d0fc01cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig"}}
	{"specversion":"1.0","id":"81d9b155-0700-4cf3-930a-b589eb0d3e5a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube"}}
	{"specversion":"1.0","id":"eb5bf0e8-b90b-4431-9683-a82ba324e9af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c6302dee-157d-460d-a422-cde26e8691b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8f065253-3ba5-45e9-a69b-f8b1cfe98ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-859309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-859309
--- PASS: TestErrorJSONOutput (0.23s)
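
The lines above are CloudEvents-style JSON, one event per stdout line. Below is a minimal Go sketch for decoding one of them into its envelope fields; the struct mirrors only the keys visible in this output (specversion, id, source, type, datacontenttype, data) and is illustrative, not minikube's own type.

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors the keys visible in the log lines above; it is an
// illustrative struct, not a type taken from minikube's source tree.
type event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// The final error event from the run above, verbatim.
	line := `{"specversion":"1.0","id":"8f065253-3ba5-45e9-a69b-f8b1cfe98ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev event
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// An error event carries the exit code and message the test asserts on.
	fmt.Println(ev.Type, ev.Data["exitcode"], ev.Data["message"])
}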

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (35.88s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-796144 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-796144 --network=: (33.823078916s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-796144" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-796144
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-796144: (2.037764911s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.88s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (32.55s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-311648 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-311648 --network=bridge: (30.657690789s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-311648" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-311648
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-311648: (1.859306842s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.55s)

                                                
                                    
x
+
TestKicExistingNetwork (33.73s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1001 20:12:21.518182  741264 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1001 20:12:21.533719  741264 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1001 20:12:21.534574  741264 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1001 20:12:21.534612  741264 cli_runner.go:164] Run: docker network inspect existing-network
W1001 20:12:21.549057  741264 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1001 20:12:21.549084  741264 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1001 20:12:21.549098  741264 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1001 20:12:21.549195  741264 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1001 20:12:21.565716  741264 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2a68ee21f9af IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a0:14:b4:5c} reservation:<nil>}
I1001 20:12:21.566091  741264 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bf9ef0}
I1001 20:12:21.566113  741264 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1001 20:12:21.566165  741264 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1001 20:12:21.631177  741264 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-201457 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-201457 --network=existing-network: (31.632494676s)
helpers_test.go:175: Cleaning up "existing-network-201457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-201457
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-201457: (1.95333339s)
I1001 20:12:55.237443  741264 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.73s)
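
The verbose log shows the setup for this test: pre-create a bridge network with a plain `docker network create`, then point `minikube start --network=` at it. A small Go sketch of that sequence with os/exec, reusing the exact flags and names logged above; it is an illustration of the sequence, not the test's actual helper code.

package main

import (
	"log"
	"os/exec"
)

// run executes one command and fails loudly, echoing combined output on error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Pre-create the bridge network with the same flags seen in the log.
	run("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")

	// Then ask minikube to join it instead of creating its own network.
	run("out/minikube-linux-arm64", "start", "-p", "existing-network-201457",
		"--network=existing-network")
}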

                                                
                                    
x
+
TestKicCustomSubnet (32.38s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-669185 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-669185 --subnet=192.168.60.0/24: (30.38990064s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-669185 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-669185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-669185
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-669185: (1.961515658s)
--- PASS: TestKicCustomSubnet (32.38s)
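
The assertion here amounts to reading the subnet back with the inspect template shown above and comparing it to the one requested on the command line. A short sketch of that check, using only the command and subnet from this log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	want := "192.168.60.0/24"
	// Same inspect command and Go template as the log line above.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-669185",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != want {
		panic(fmt.Sprintf("subnet mismatch: got %q, want %q", got, want))
	}
	fmt.Println("subnet matches:", got)
}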

                                                
                                    
x
+
TestKicStaticIP (31.18s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-139833 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-139833 --static-ip=192.168.200.200: (29.04529777s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-139833 ip
helpers_test.go:175: Cleaning up "static-ip-139833" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-139833
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-139833: (1.9982969s)
--- PASS: TestKicStaticIP (31.18s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (68.52s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-093729 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-093729 --driver=docker  --container-runtime=containerd: (29.037300495s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-096469 --driver=docker  --container-runtime=containerd
E1001 20:14:51.838336  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-096469 --driver=docker  --container-runtime=containerd: (34.033319799s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-093729
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-096469
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-096469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-096469
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-096469: (1.953419946s)
helpers_test.go:175: Cleaning up "first-093729" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-093729
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-093729: (2.171905916s)
--- PASS: TestMinikubeProfile (68.52s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.52s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-424703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-424703 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.517318768s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.52s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-424703 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.43s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-426648 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-426648 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.432995558s)
E1001 20:15:20.383114  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMountStart/serial/StartWithMountSecond (6.43s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-426648 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-424703 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-424703 --alsologtostderr -v=5: (1.606172325s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-426648 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-426648
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-426648: (1.198274332s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.27s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-426648
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-426648: (6.269395754s)
--- PASS: TestMountStart/serial/RestartStopped (7.27s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-426648 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (63.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-518807 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-518807 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m2.951262247s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.44s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (18.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- rollout status deployment/busybox
E1001 20:16:43.448844  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-518807 -- rollout status deployment/busybox: (16.231960998s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-gx96f -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-w4f76 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-gx96f -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-w4f76 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-gx96f -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-w4f76 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.03s)
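
The DNS check loops over the busybox pod names returned by the jsonpath query and runs the same three nslookup targets inside each pod. A compact Go sketch of that loop; note the test drives kubectl through `minikube kubectl -p multinode-518807 --`, while this sketch calls kubectl with --context directly as a simplification.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "multinode-518807"
	// Pod names, collected the same way as the test's jsonpath query.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		for _, host := range targets {
			// Same in-pod lookup the test runs via `kubectl exec`.
			if err := exec.Command("kubectl", "--context", ctx, "exec", pod,
				"--", "nslookup", host).Run(); err != nil {
				panic(fmt.Sprintf("%s: nslookup %s failed: %v", pod, host, err))
			}
		}
	}
	fmt.Println("DNS resolution OK in all busybox pods")
}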

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-gx96f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-gx96f -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-w4f76 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-518807 -- exec busybox-7dff88458-w4f76 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.96s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (16.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-518807 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-518807 -v 3 --alsologtostderr: (15.795098681s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.46s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-518807 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.66s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp testdata/cp-test.txt multinode-518807:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2657967281/001/cp-test_multinode-518807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807:/home/docker/cp-test.txt multinode-518807-m02:/home/docker/cp-test_multinode-518807_multinode-518807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m02 "sudo cat /home/docker/cp-test_multinode-518807_multinode-518807-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807:/home/docker/cp-test.txt multinode-518807-m03:/home/docker/cp-test_multinode-518807_multinode-518807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m03 "sudo cat /home/docker/cp-test_multinode-518807_multinode-518807-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp testdata/cp-test.txt multinode-518807-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2657967281/001/cp-test_multinode-518807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807-m02:/home/docker/cp-test.txt multinode-518807:/home/docker/cp-test_multinode-518807-m02_multinode-518807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807 "sudo cat /home/docker/cp-test_multinode-518807-m02_multinode-518807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807-m02:/home/docker/cp-test.txt multinode-518807-m03:/home/docker/cp-test_multinode-518807-m02_multinode-518807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m03 "sudo cat /home/docker/cp-test_multinode-518807-m02_multinode-518807-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp testdata/cp-test.txt multinode-518807-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2657967281/001/cp-test_multinode-518807-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807-m03:/home/docker/cp-test.txt multinode-518807:/home/docker/cp-test_multinode-518807-m03_multinode-518807.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807 "sudo cat /home/docker/cp-test_multinode-518807-m03_multinode-518807.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 cp multinode-518807-m03:/home/docker/cp-test.txt multinode-518807-m02:/home/docker/cp-test_multinode-518807-m03_multinode-518807-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 ssh -n multinode-518807-m02 "sudo cat /home/docker/cp-test_multinode-518807-m03_multinode-518807-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.60s)
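
Every cp above is followed by an `ssh -n <node> "sudo cat ..."` to confirm the bytes landed. A minimal sketch of one host-to-node round trip using the same fixture file and command shapes as the helpers; it is an illustration of the pattern, not the helper code itself.

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile := "multinode-518807"
	local := "testdata/cp-test.txt" // same fixture the helpers copy
	remote := profile + ":/home/docker/cp-test.txt"

	// Copy the file onto the control-plane node.
	if out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"cp", local, remote).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Read it back over ssh and compare against the local contents.
	got, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
		"ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatal(err)
	}
	want, err := os.ReadFile(local)
	if err != nil {
		log.Fatal(err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the local fixture")
	}
}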

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-518807 node stop m03: (1.21177647s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-518807 status: exit status 7 (502.719166ms)

                                                
                                                
-- stdout --
	multinode-518807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-518807-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-518807-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-518807 status --alsologtostderr: exit status 7 (497.455804ms)

                                                
                                                
-- stdout --
	multinode-518807
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-518807-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-518807-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:17:24.176955  865501 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:17:24.177154  865501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:17:24.177180  865501 out.go:358] Setting ErrFile to fd 2...
	I1001 20:17:24.177201  865501 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:17:24.177486  865501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:17:24.177712  865501 out.go:352] Setting JSON to false
	I1001 20:17:24.177773  865501 mustload.go:65] Loading cluster: multinode-518807
	I1001 20:17:24.177858  865501 notify.go:220] Checking for updates...
	I1001 20:17:24.178278  865501 config.go:182] Loaded profile config "multinode-518807": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 20:17:24.178318  865501 status.go:174] checking status of multinode-518807 ...
	I1001 20:17:24.178977  865501 cli_runner.go:164] Run: docker container inspect multinode-518807 --format={{.State.Status}}
	I1001 20:17:24.199601  865501 status.go:371] multinode-518807 host status = "Running" (err=<nil>)
	I1001 20:17:24.199649  865501 host.go:66] Checking if "multinode-518807" exists ...
	I1001 20:17:24.199987  865501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-518807
	I1001 20:17:24.230354  865501 host.go:66] Checking if "multinode-518807" exists ...
	I1001 20:17:24.230663  865501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 20:17:24.230707  865501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-518807
	I1001 20:17:24.248963  865501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33673 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/multinode-518807/id_rsa Username:docker}
	I1001 20:17:24.341490  865501 ssh_runner.go:195] Run: systemctl --version
	I1001 20:17:24.345867  865501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:17:24.357464  865501 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:17:24.413267  865501 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-01 20:17:24.403026593 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:17:24.413904  865501 kubeconfig.go:125] found "multinode-518807" server: "https://192.168.67.2:8443"
	I1001 20:17:24.413939  865501 api_server.go:166] Checking apiserver status ...
	I1001 20:17:24.413983  865501 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1001 20:17:24.425234  865501 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	I1001 20:17:24.434682  865501 api_server.go:182] apiserver freezer: "3:freezer:/docker/dd315862697d266510804976b61d3322f38477bfea6ad6e91bb4afc70842f620/kubepods/burstable/poda4cd65ca30ea0550a1f7848da933b1f9/09eb05bfa6769790689d3148c4d6516c38785d8ae740a74e3ce22bb10af6f10d"
	I1001 20:17:24.434765  865501 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/dd315862697d266510804976b61d3322f38477bfea6ad6e91bb4afc70842f620/kubepods/burstable/poda4cd65ca30ea0550a1f7848da933b1f9/09eb05bfa6769790689d3148c4d6516c38785d8ae740a74e3ce22bb10af6f10d/freezer.state
	I1001 20:17:24.443708  865501 api_server.go:204] freezer state: "THAWED"
	I1001 20:17:24.443746  865501 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1001 20:17:24.451796  865501 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1001 20:17:24.451877  865501 status.go:463] multinode-518807 apiserver status = Running (err=<nil>)
	I1001 20:17:24.451903  865501 status.go:176] multinode-518807 status: &{Name:multinode-518807 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 20:17:24.451952  865501 status.go:174] checking status of multinode-518807-m02 ...
	I1001 20:17:24.452303  865501 cli_runner.go:164] Run: docker container inspect multinode-518807-m02 --format={{.State.Status}}
	I1001 20:17:24.468368  865501 status.go:371] multinode-518807-m02 host status = "Running" (err=<nil>)
	I1001 20:17:24.468395  865501 host.go:66] Checking if "multinode-518807-m02" exists ...
	I1001 20:17:24.468731  865501 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-518807-m02
	I1001 20:17:24.485425  865501 host.go:66] Checking if "multinode-518807-m02" exists ...
	I1001 20:17:24.485756  865501 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1001 20:17:24.485811  865501 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-518807-m02
	I1001 20:17:24.502114  865501 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33678 SSHKeyPath:/home/jenkins/minikube-integration/19736-735883/.minikube/machines/multinode-518807-m02/id_rsa Username:docker}
	I1001 20:17:24.593353  865501 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1001 20:17:24.604643  865501 status.go:176] multinode-518807-m02 status: &{Name:multinode-518807-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1001 20:17:24.604677  865501 status.go:174] checking status of multinode-518807-m03 ...
	I1001 20:17:24.605004  865501 cli_runner.go:164] Run: docker container inspect multinode-518807-m03 --format={{.State.Status}}
	I1001 20:17:24.621639  865501 status.go:371] multinode-518807-m03 host status = "Stopped" (err=<nil>)
	I1001 20:17:24.621663  865501 status.go:384] host is not running, skipping remaining checks
	I1001 20:17:24.621671  865501 status.go:176] multinode-518807-m03 status: &{Name:multinode-518807-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
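
The verbose status trace shows how the API-server check ends: after finding the kube-apiserver process and confirming its freezer cgroup is THAWED, the command probes https://192.168.67.2:8443/healthz and expects a 200 with body "ok". A minimal Go sketch of that final probe; skipping TLS verification here is a shortcut for the sketch only, not how minikube itself performs the check.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// NOTE: InsecureSkipVerify is a simplification for this sketch; a real
	// client would trust the cluster CA from the profile's cert directory.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}

	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// The trace above records "returned 200" followed by the body "ok".
	fmt.Println(resp.StatusCode, string(body))
}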

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-518807 node start m03 -v=7 --alsologtostderr: (8.83538332s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.58s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (95.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-518807
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-518807
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-518807: (24.941576706s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-518807 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-518807 --wait=true -v=8 --alsologtostderr: (1m10.296213217s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-518807
--- PASS: TestMultiNode/serial/RestartKeepsNodes (95.37s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-518807 node delete m03: (4.775302648s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.40s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-518807 stop: (23.773618258s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-518807 status: exit status 7 (88.331938ms)

                                                
                                                
-- stdout --
	multinode-518807
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-518807-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-518807 status --alsologtostderr: exit status 7 (95.221237ms)

                                                
                                                
-- stdout --
	multinode-518807
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-518807-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1001 20:19:38.872172  873933 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:19:38.872289  873933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:19:38.872299  873933 out.go:358] Setting ErrFile to fd 2...
	I1001 20:19:38.872305  873933 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:19:38.872594  873933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:19:38.872784  873933 out.go:352] Setting JSON to false
	I1001 20:19:38.872819  873933 mustload.go:65] Loading cluster: multinode-518807
	I1001 20:19:38.872924  873933 notify.go:220] Checking for updates...
	I1001 20:19:38.873242  873933 config.go:182] Loaded profile config "multinode-518807": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 20:19:38.873256  873933 status.go:174] checking status of multinode-518807 ...
	I1001 20:19:38.874062  873933 cli_runner.go:164] Run: docker container inspect multinode-518807 --format={{.State.Status}}
	I1001 20:19:38.890611  873933 status.go:371] multinode-518807 host status = "Stopped" (err=<nil>)
	I1001 20:19:38.890636  873933 status.go:384] host is not running, skipping remaining checks
	I1001 20:19:38.890643  873933 status.go:176] multinode-518807 status: &{Name:multinode-518807 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1001 20:19:38.890670  873933 status.go:174] checking status of multinode-518807-m02 ...
	I1001 20:19:38.890972  873933 cli_runner.go:164] Run: docker container inspect multinode-518807-m02 --format={{.State.Status}}
	I1001 20:19:38.917844  873933 status.go:371] multinode-518807-m02 host status = "Stopped" (err=<nil>)
	I1001 20:19:38.917869  873933 status.go:384] host is not running, skipping remaining checks
	I1001 20:19:38.917876  873933 status.go:176] multinode-518807-m02 status: &{Name:multinode-518807-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (49.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-518807 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1001 20:19:51.838030  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:20:20.382572  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-518807 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.32505094s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-518807 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.97s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (33.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-518807
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-518807-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-518807-m02 --driver=docker  --container-runtime=containerd: exit status 14 (75.937069ms)

                                                
                                                
-- stdout --
	* [multinode-518807-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-518807-m02' is duplicated with machine name 'multinode-518807-m02' in profile 'multinode-518807'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-518807-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-518807-m03 --driver=docker  --container-runtime=containerd: (30.841800589s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-518807
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-518807: exit status 80 (331.576031ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-518807 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-518807-m03 already exists in multinode-518807-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-518807-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-518807-m03: (1.967028835s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.26s)
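
The rejected start above (exit status 14) exercises minikube's guard against a new profile whose name collides with a machine name inside an existing multi-node profile. The sketch below illustrates that kind of uniqueness check; the helper names and the machine-naming convention (profile, profile-m02, profile-m03, ...) are inferred from this log, and none of this is minikube's actual implementation.

    // Hypothetical sketch: reject a new profile name that collides with an
    // existing profile or one of its machines.
    package main

    import (
        "fmt"
        "strings"
    )

    // machineNames mirrors how a multi-node profile names its machines:
    // "<profile>", "<profile>-m02", "<profile>-m03", ...
    func machineNames(profile string, nodes int) []string {
        names := []string{profile}
        for i := 2; i <= nodes; i++ {
            names = append(names, fmt.Sprintf("%s-m%02d", profile, i))
        }
        return names
    }

    func validateProfileName(newName string, existingProfiles map[string]int) error {
        for profile, nodes := range existingProfiles {
            for _, m := range machineNames(profile, nodes) {
                if strings.EqualFold(newName, m) {
                    return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", newName, m, profile)
                }
            }
        }
        return nil
    }

    func main() {
        existing := map[string]int{"multinode-518807": 2} // two nodes, so "-m02" exists
        fmt.Println(validateProfileName("multinode-518807-m02", existing)) // rejected
        fmt.Println(validateProfileName("multinode-518807-m03", existing)) // nil: allowed
    }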

TestPreload (114.19s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-957767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1001 20:21:14.903935  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-957767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m16.330121807s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-957767 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-957767 image pull gcr.io/k8s-minikube/busybox: (2.13322849s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-957767
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-957767: (12.089352356s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-957767 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-957767 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.846568607s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-957767 image list
helpers_test.go:175: Cleaning up "test-preload-957767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-957767
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-957767: (2.424730692s)
--- PASS: TestPreload (114.19s)

TestScheduledStopUnix (108.52s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-744999 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-744999 --memory=2048 --driver=docker  --container-runtime=containerd: (32.598539382s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-744999 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-744999 -n scheduled-stop-744999
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-744999 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1001 20:23:33.277930  741264 retry.go:31] will retry after 139.903µs: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.278414  741264 retry.go:31] will retry after 215.704µs: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.279534  741264 retry.go:31] will retry after 252.709µs: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.280626  741264 retry.go:31] will retry after 381.565µs: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.281751  741264 retry.go:31] will retry after 254.987µs: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.282827  741264 retry.go:31] will retry after 1.053448ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.284898  741264 retry.go:31] will retry after 1.276242ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.287100  741264 retry.go:31] will retry after 1.694801ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.289308  741264 retry.go:31] will retry after 2.386563ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.292521  741264 retry.go:31] will retry after 3.23039ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.296891  741264 retry.go:31] will retry after 4.763135ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.303444  741264 retry.go:31] will retry after 5.923987ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.309705  741264 retry.go:31] will retry after 17.566806ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.327931  741264 retry.go:31] will retry after 25.935637ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
I1001 20:23:33.354102  741264 retry.go:31] will retry after 43.162311ms: open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/scheduled-stop-744999/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-744999 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-744999 -n scheduled-stop-744999
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-744999
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-744999 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-744999
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-744999: exit status 7 (63.885089ms)
-- stdout --
	scheduled-stop-744999
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-744999 -n scheduled-stop-744999
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-744999 -n scheduled-stop-744999: exit status 7 (64.352212ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-744999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-744999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-744999: (4.427876969s)
--- PASS: TestScheduledStopUnix (108.52s)
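
The retry.go lines above show the test polling for the profile's pid file with steadily increasing delays until it appears. The sketch below reproduces that wait-with-backoff pattern in plain Go; it is illustrative and not the retry helper minikube itself uses.

    // Illustrative wait-with-backoff: poll for a file with exponentially
    // growing delays, the pattern visible in the retry.go lines above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForFile(path string, attempts int, initial time.Duration) ([]byte, error) {
        delay := initial
        for i := 0; i < attempts; i++ {
            data, err := os.ReadFile(path)
            if err == nil {
                return data, nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // back off between attempts
        }
        return nil, fmt.Errorf("gave up waiting for %s after %d attempts", path, attempts)
    }

    func main() {
        // The scheduled-stop test waits for the profile's pid file to appear.
        if data, err := waitForFile("/tmp/example-pid", 10, 200*time.Microsecond); err == nil {
            fmt.Printf("pid file contents: %s\n", data)
        } else {
            fmt.Println(err)
        }
    }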

TestInsufficientStorage (10.61s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-792772 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1001 20:24:51.838308  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-792772 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.184701716s)
-- stdout --
	{"specversion":"1.0","id":"befdf59b-9c08-42aa-9aec-0ff062c255ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-792772] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3e84c74-4549-4b07-9db8-6fcb7ef80b91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19736"}}
	{"specversion":"1.0","id":"384dcd58-fde1-4b7d-a033-25678fa4c9ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8573772b-c4bb-4c02-a358-dafa6f6656be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig"}}
	{"specversion":"1.0","id":"76617c85-7a50-4687-b537-49fc075412c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube"}}
	{"specversion":"1.0","id":"b8b5c9a2-26e2-462d-8a05-2592f1f268ea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e3a92eaf-e99c-444d-b880-82898cb8f0a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"056cb5b6-777b-4e5f-931b-a03c1661328b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b6131bbb-a1e3-423c-8089-6ec5c5dc3f6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"4ac7156a-1641-4746-b0b7-6c0d6c6bf523","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"876df636-d712-4605-aa5b-be744c6103e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"ebbcbea8-03fa-4137-aa49-b23c38990934","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-792772\" primary control-plane node in \"insufficient-storage-792772\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6eadaee-33a0-450a-844b-2cd6ca18a4ba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727731891-master ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2e96523c-5f0f-4a2f-9039-74c3c21e445c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"35856cdc-ae72-4d79-8169-23cc4ff7ec47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-792772 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-792772 --output=json --layout=cluster: exit status 7 (276.275926ms)
-- stdout --
	{"Name":"insufficient-storage-792772","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-792772","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1001 20:24:57.150210  892480 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-792772" does not appear in /home/jenkins/minikube-integration/19736-735883/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-792772 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-792772 --output=json --layout=cluster: exit status 7 (271.387675ms)
-- stdout --
	{"Name":"insufficient-storage-792772","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-792772","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1001 20:24:57.420550  892542 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-792772" does not appear in /home/jenkins/minikube-integration/19736-735883/kubeconfig
	E1001 20:24:57.430246  892542 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/insufficient-storage-792772/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-792772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-792772
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-792772: (1.87393826s)
--- PASS: TestInsufficientStorage (10.61s)
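
With --output=json, minikube start emits one CloudEvents-style JSON object per line, and the out-of-disk failure above arrives as an event of type "io.k8s.sigs.minikube.error" carrying the RSRC_DOCKER_STORAGE name and exit code 26. The sketch below shows one way to pick that event out of the stream; the field names come from the output above, but the parser itself is illustrative and not part of minikube.

    // Illustrative parser for the JSON-line output of `minikube start --output=json`.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
        "strings"
    )

    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        sc := bufio.NewScanner(os.Stdin) // pipe the --output=json stream in here
        sc.Buffer(make([]byte, 1024*1024), 1024*1024)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if !strings.HasPrefix(line, "{") {
                continue // skip anything that is not a JSON event line
            }
            var ev event
            if err := json.Unmarshal([]byte(line), &ev); err != nil {
                continue
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }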

TestRunningBinaryUpgrade (89.5s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4198997525 start -p running-upgrade-553038 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4198997525 start -p running-upgrade-553038 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.902866464s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-553038 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-553038 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.966893055s)
helpers_test.go:175: Cleaning up "running-upgrade-553038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-553038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-553038: (3.436794084s)
--- PASS: TestRunningBinaryUpgrade (89.50s)

TestKubernetesUpgrade (99.86s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712196 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-712196 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.789051897s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-712196
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-712196: (1.212757306s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-712196 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-712196 status --format={{.Host}}: exit status 7 (65.53058ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712196 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-712196 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (28.764579886s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-712196 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712196 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-712196 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (96.842482ms)
-- stdout --
	* [kubernetes-upgrade-712196] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-712196
	    minikube start -p kubernetes-upgrade-712196 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7121962 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-712196 --kubernetes-version=v1.31.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-712196 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-712196 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.242850581s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-712196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-712196
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-712196: (2.578392607s)
--- PASS: TestKubernetesUpgrade (99.86s)
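
The downgrade attempt above fails with K8S_DOWNGRADE_UNSUPPORTED because the requested v1.20.0 is older than the cluster's running v1.31.1. The sketch below shows the kind of major/minor/patch comparison such a guard needs; it is a hypothetical illustration, not minikube's validation code.

    // Hypothetical guard: refuse to move an existing cluster to an older version.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // parse turns "v1.31.1" into [1 31 1]; it assumes a plain vMAJOR.MINOR.PATCH string.
    func parse(v string) ([3]int, error) {
        var out [3]int
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        if len(parts) != 3 {
            return out, fmt.Errorf("unexpected version %q", v)
        }
        for i, p := range parts {
            n, err := strconv.Atoi(p)
            if err != nil {
                return out, err
            }
            out[i] = n
        }
        return out, nil
    }

    func checkUpgrade(current, requested string) error {
        c, err := parse(current)
        if err != nil {
            return err
        }
        r, err := parse(requested)
        if err != nil {
            return err
        }
        for i := range c {
            if r[i] < c[i] {
                return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", current, requested)
            }
            if r[i] > c[i] {
                break
            }
        }
        return nil
    }

    func main() {
        fmt.Println(checkUpgrade("v1.31.1", "v1.20.0")) // downgrade: rejected
        fmt.Println(checkUpgrade("v1.20.0", "v1.31.1")) // upgrade: nil
    }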

TestMissingContainerUpgrade (182.33s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2953571612 start -p missing-upgrade-922578 --memory=2200 --driver=docker  --container-runtime=containerd
E1001 20:25:20.383051  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2953571612 start -p missing-upgrade-922578 --memory=2200 --driver=docker  --container-runtime=containerd: (1m35.052630904s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-922578
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-922578
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-922578 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-922578 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m20.507498839s)
helpers_test.go:175: Cleaning up "missing-upgrade-922578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-922578
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-922578: (4.963031451s)
--- PASS: TestMissingContainerUpgrade (182.33s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957859 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-957859 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (78.73075ms)
-- stdout --
	* [NoKubernetes-957859] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (38.87s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957859 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-957859 --driver=docker  --container-runtime=containerd: (38.327921039s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-957859 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.87s)

TestNoKubernetes/serial/StartWithStopK8s (18.66s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957859 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-957859 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.388223723s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-957859 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-957859 status -o json: exit status 2 (346.06052ms)
-- stdout --
	{"Name":"NoKubernetes-957859","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-957859
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-957859: (1.927190433s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.66s)
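
The status -o json call above returns a single JSON object per profile; in --no-kubernetes mode the host keeps running while the kubelet and apiserver report Stopped. The sketch below decodes that object; the struct mirrors the fields shown in the log but is not a type exported by minikube.

    // Decode the `minikube status -o json` object shown above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type profileStatus struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        raw := `{"Name":"NoKubernetes-957859","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
        var st profileStatus
        if err := json.Unmarshal([]byte(raw), &st); err != nil {
            panic(err)
        }
        // In --no-kubernetes mode the container keeps running while Kubernetes stays down.
        fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
    }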

TestNoKubernetes/serial/Start (4.81s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957859 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-957859 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.812728454s)
--- PASS: TestNoKubernetes/serial/Start (4.81s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-957859 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-957859 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.798575ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (0.94s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.94s)

TestNoKubernetes/serial/Stop (1.21s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-957859
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-957859: (1.209855328s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.49s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-957859 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-957859 --driver=docker  --container-runtime=containerd: (6.486179065s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.49s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-957859 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-957859 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.984701ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestStoppedBinaryUpgrade/Setup (0.7s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

TestStoppedBinaryUpgrade/Upgrade (139.73s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.367240262 start -p stopped-upgrade-778722 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.367240262 start -p stopped-upgrade-778722 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (53.420415834s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.367240262 -p stopped-upgrade-778722 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.367240262 -p stopped-upgrade-778722 stop: (20.710292718s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-778722 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-778722 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.60054732s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (139.73s)

TestPause/serial/Start (85.02s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-023668 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1001 20:29:51.838290  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-023668 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m25.020653536s)
--- PASS: TestPause/serial/Start (85.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.3s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-778722
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-778722: (1.298472551s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.30s)

TestNetworkPlugins/group/false (3.85s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-272394 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-272394 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (175.119196ms)
-- stdout --
	* [false-272394] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19736
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1001 20:30:52.610661  927132 out.go:345] Setting OutFile to fd 1 ...
	I1001 20:30:52.610805  927132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:30:52.610842  927132 out.go:358] Setting ErrFile to fd 2...
	I1001 20:30:52.610853  927132 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1001 20:30:52.611107  927132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19736-735883/.minikube/bin
	I1001 20:30:52.611514  927132 out.go:352] Setting JSON to false
	I1001 20:30:52.612519  927132 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":15200,"bootTime":1727799453,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I1001 20:30:52.612635  927132 start.go:139] virtualization:  
	I1001 20:30:52.615161  927132 out.go:177] * [false-272394] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1001 20:30:52.617723  927132 out.go:177]   - MINIKUBE_LOCATION=19736
	I1001 20:30:52.617791  927132 notify.go:220] Checking for updates...
	I1001 20:30:52.622003  927132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1001 20:30:52.624789  927132 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19736-735883/kubeconfig
	I1001 20:30:52.627290  927132 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19736-735883/.minikube
	I1001 20:30:52.629245  927132 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1001 20:30:52.631667  927132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1001 20:30:52.634272  927132 config.go:182] Loaded profile config "pause-023668": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1001 20:30:52.634426  927132 driver.go:394] Setting default libvirt URI to qemu:///system
	I1001 20:30:52.666444  927132 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1001 20:30:52.666603  927132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1001 20:30:52.722482  927132 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-01 20:30:52.712284305 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1001 20:30:52.722592  927132 docker.go:318] overlay module found
	I1001 20:30:52.725197  927132 out.go:177] * Using the docker driver based on user configuration
	I1001 20:30:52.727085  927132 start.go:297] selected driver: docker
	I1001 20:30:52.727104  927132 start.go:901] validating driver "docker" against <nil>
	I1001 20:30:52.727127  927132 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1001 20:30:52.730176  927132 out.go:201] 
	W1001 20:30:52.732265  927132 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1001 20:30:52.734200  927132 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-272394 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-272394
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-272394
>>> host: /etc/nsswitch.conf:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: /etc/hosts:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: /etc/resolv.conf:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-272394
>>> host: crictl pods:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: crictl containers:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> k8s: describe netcat deployment:
error: context "false-272394" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-272394" does not exist
>>> k8s: netcat logs:
error: context "false-272394" does not exist
>>> k8s: describe coredns deployment:
error: context "false-272394" does not exist
>>> k8s: describe coredns pods:
error: context "false-272394" does not exist
>>> k8s: coredns logs:
error: context "false-272394" does not exist
>>> k8s: describe api server pod(s):
error: context "false-272394" does not exist
>>> k8s: api server logs:
error: context "false-272394" does not exist
>>> host: /etc/cni:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: ip a s:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: ip r s:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: iptables-save:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: iptables table nat:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> k8s: describe kube-proxy daemon set:
error: context "false-272394" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-272394" does not exist
>>> k8s: kube-proxy logs:
error: context "false-272394" does not exist
>>> host: kubelet daemon status:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: kubelet daemon config:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> k8s: kubelet logs:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 20:30:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-023668
contexts:
- context:
    cluster: pause-023668
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 20:30:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-023668
  name: pause-023668
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-023668
  user:
    client-certificate: /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/pause-023668/client.crt
    client-key: /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/pause-023668/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-272394

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-272394"

                                                
                                                
----------------------- debugLogs end: false-272394 [took: 3.435820926s] --------------------------------
helpers_test.go:175: Cleaning up "false-272394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-272394
--- PASS: TestNetworkPlugins/group/false (3.85s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (6.77s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-023668 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-023668 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.750019107s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.77s)

                                                
                                    
x
+
TestPause/serial/Pause (0.88s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-023668 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-023668 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-023668 --output=json --layout=cluster: exit status 2 (389.365392ms)

                                                
                                                
-- stdout --
	{"Name":"pause-023668","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-023668","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.39s)
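The status probe above can be reproduced by hand. A minimal sketch, assuming only that jq is available (the jq filter is illustrative and not part of the test; the minikube command and flags are the ones shown in the log):

  # Query the paused cluster's status as JSON; exit status 2 is expected while the profile is paused.
  out/minikube-linux-arm64 status -p pause-023668 --output=json --layout=cluster
  # Optionally extract the per-component states (jq usage is an assumption, not something the test does).
  out/minikube-linux-arm64 status -p pause-023668 --output=json --layout=cluster | jq '.Nodes[].Components'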

                                                
                                    
x
+
TestPause/serial/Unpause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-023668 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.82s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.06s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-023668 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-023668 --alsologtostderr -v=5: (1.059176086s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (3.02s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-023668 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-023668 --alsologtostderr -v=5: (3.01699791s)
--- PASS: TestPause/serial/DeletePaused (3.02s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-023668
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-023668: exit status 1 (14.800225ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-023668: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.47s)
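The cleanup verification above only re-runs standard docker and minikube listing commands, so it is easy to repeat manually. A minimal sketch using the same commands the test runs (the grep/echo filtering is an illustrative addition):

  out/minikube-linux-arm64 profile list --output json         # the deleted profile should no longer appear
  docker ps -a | grep pause-023668 || echo "no leftover containers"
  docker volume inspect pause-023668                           # expected to fail with "no such volume"
  docker network ls | grep pause-023668 || echo "no leftover network"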

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (151.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-992970 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1001 20:33:23.450570  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-992970 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m31.161706192s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (151.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-992970 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [99f86444-def1-4874-9454-3fa15519f26d] Pending
E1001 20:34:51.838059  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [99f86444-def1-4874-9454-3fa15519f26d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [99f86444-def1-4874-9454-3fa15519f26d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.022128573s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-992970 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.09s)
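The deploy step drives kubectl directly, so the readiness wait can be approximated outside the test harness. A minimal sketch, using kubectl wait as a stand-in for the test's own polling helpers (testdata/busybox.yaml is the file referenced in the log):

  kubectl --context old-k8s-version-992970 create -f testdata/busybox.yaml
  # The test polls for up to 8m; kubectl wait is an illustrative substitute for that polling.
  kubectl --context old-k8s-version-992970 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
  kubectl --context old-k8s-version-992970 exec busybox -- /bin/sh -c "ulimit -n"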

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (67.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-381888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-381888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m7.878987085s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.88s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-992970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-992970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.664160221s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-992970 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.93s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (14.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-992970 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-992970 --alsologtostderr -v=3: (14.5295554s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-992970 -n old-k8s-version-992970
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-992970 -n old-k8s-version-992970: exit status 7 (129.82573ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-992970 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.29s)
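The exit status 7 noted above is the expected result of querying a stopped profile, and the test proceeds to enable the addon anyway. A minimal sketch of the same tolerance, with the if-wrapper added purely for illustration (all commands and flags are taken from the log):

  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-992970 -n old-k8s-version-992970
  # A stopped profile prints "Stopped" and exits 7, which the test treats as acceptable.
  if [ $? -eq 7 ]; then
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-992970 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  fi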

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-381888 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fa543b20-1a59-40fc-bcad-ba73f681af4b] Pending
helpers_test.go:344: "busybox" [fa543b20-1a59-40fc-bcad-ba73f681af4b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fa543b20-1a59-40fc-bcad-ba73f681af4b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003591535s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-381888 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-381888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-381888 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.000356648s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-381888 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-381888 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-381888 --alsologtostderr -v=3: (12.059342995s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-381888 -n no-preload-381888
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-381888 -n no-preload-381888: exit status 7 (73.566455ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-381888 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (265.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-381888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1001 20:37:54.905575  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:39:51.838689  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:40:20.383114  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-381888 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m25.494758189s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-381888 -n no-preload-381888
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (265.84s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lfkwv" [17eaf505-db95-433d-8556-f31a98aa6361] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003930897s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lfkwv" [17eaf505-db95-433d-8556-f31a98aa6361] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003968285s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-381888 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-381888 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
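The image check above parses the JSON emitted by image list. A minimal sketch of inspecting the same output by hand; the jq filters and the repoTags field name are assumptions for illustration, not taken from the test:

  out/minikube-linux-arm64 -p no-preload-381888 image list --format=json | jq .
  # If the entries expose a repoTags array (an assumption about the schema), the tags can be flattened like this:
  out/minikube-linux-arm64 -p no-preload-381888 image list --format=json | jq -r '.[].repoTags[]?'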

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-381888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-381888 -n no-preload-381888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-381888 -n no-preload-381888: exit status 2 (309.764181ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-381888 -n no-preload-381888
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-381888 -n no-preload-381888: exit status 2 (308.233918ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-381888 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-381888 -n no-preload-381888
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-381888 -n no-preload-381888
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (49.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-734252 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-734252 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (49.512110946s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.51s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sbvn5" [360ceffe-f30f-4492-8c74-885a476e0932] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004444662s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-sbvn5" [360ceffe-f30f-4492-8c74-885a476e0932] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003796786s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-992970 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-992970 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-992970 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-992970 -n old-k8s-version-992970
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-992970 -n old-k8s-version-992970: exit status 2 (315.241786ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-992970 -n old-k8s-version-992970
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-992970 -n old-k8s-version-992970: exit status 2 (306.319023ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-992970 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-992970 -n old-k8s-version-992970
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-992970 -n old-k8s-version-992970
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-663983 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-663983 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m31.061848134s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (91.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-734252 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c469179f-5540-4bb8-9985-5e6c38a4f787] Pending
helpers_test.go:344: "busybox" [c469179f-5540-4bb8-9985-5e6c38a4f787] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c469179f-5540-4bb8-9985-5e6c38a4f787] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.006237535s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-734252 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.44s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-734252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-734252 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.318383555s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-734252 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-734252 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-734252 --alsologtostderr -v=3: (12.307429412s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-734252 -n embed-certs-734252
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-734252 -n embed-certs-734252: exit status 7 (135.441328ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-734252 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (267.82s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-734252 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-734252 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.470808404s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-734252 -n embed-certs-734252
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.82s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-663983 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cdf4808c-de72-41ff-96f3-57786e58aa8d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cdf4808c-de72-41ff-96f3-57786e58aa8d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005180966s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-663983 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-663983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-663983 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-663983 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-663983 --alsologtostderr -v=3: (12.098097342s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983: exit status 7 (82.238613ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-663983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-663983 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1001 20:44:51.633712  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:51.640064  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:51.651499  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:51.673014  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:51.714401  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:51.795782  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:51.838494  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:51.958024  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:52.279586  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:52.921547  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:54.203448  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:44:56.764801  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:45:01.887085  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:45:12.129010  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:45:20.382824  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/addons-164127/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:45:32.610344  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:06.627703  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:06.637812  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:06.649207  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:06.670602  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:06.712003  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:06.794223  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:06.955808  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:07.277415  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:07.918812  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:09.200310  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:11.761873  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:13.571744  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:16.883827  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:27.125897  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:46:47.607291  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-663983 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.602316803s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gt49z" [edc15776-9ecb-42a6-b596-c59c06705462] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003922352s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gt49z" [edc15776-9ecb-42a6-b596-c59c06705462] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00377178s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-734252 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-734252 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-734252 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-734252 -n embed-certs-734252
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-734252 -n embed-certs-734252: exit status 2 (306.558146ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-734252 -n embed-certs-734252
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-734252 -n embed-certs-734252: exit status 2 (308.153704ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-734252 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-734252 -n embed-certs-734252
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-734252 -n embed-certs-734252
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.04s)
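
The Pause subtest above follows a fixed verification sequence; condensed from the commands in this log (embed-certs-734252 profile), it is roughly:

    out/minikube-linux-arm64 pause -p embed-certs-734252 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-734252 -n embed-certs-734252   # reports "Paused", exit status 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-734252 -n embed-certs-734252     # reports "Stopped", exit status 2
    out/minikube-linux-arm64 unpause -p embed-certs-734252 --alsologtostderr -v=1
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-734252 -n embed-certs-734252   # succeeds again after unpause
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-734252 -n embed-certs-734252

The non-zero exit codes while paused are expected, which is why the test logs "status error: exit status 2 (may be ok)" instead of failing.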

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-765180 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1001 20:47:28.569249  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:47:35.493512  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-765180 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (37.841753347s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.84s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-765180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-765180 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04696202s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-765180 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-765180 --alsologtostderr -v=3: (1.258146285s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-765180 -n newest-cni-765180
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-765180 -n newest-cni-765180: exit status 7 (68.532971ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-765180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
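
Taken together, Stop, EnableAddonAfterStop and the following SecondStart exercise enabling an addon while the cluster is down; the sequence, condensed from the commands in this log (newest-cni-765180 profile, other start flags elided here), is roughly:

    out/minikube-linux-arm64 stop -p newest-cni-765180 --alsologtostderr -v=3
    out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-765180 -n newest-cni-765180    # "Stopped", exit status 7 (may be ok)
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-765180 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
    out/minikube-linux-arm64 start -p newest-cni-765180 ... --kubernetes-version=v1.31.1            # SecondStart, same flags as FirstStart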

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-765180 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-765180 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (14.73594421s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-765180 -n newest-cni-765180
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-765180 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-765180 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-765180 --alsologtostderr -v=1: (1.074180401s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-765180 -n newest-cni-765180
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-765180 -n newest-cni-765180: exit status 2 (385.180375ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-765180 -n newest-cni-765180
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-765180 -n newest-cni-765180: exit status 2 (332.333924ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-765180 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-765180 -n newest-cni-765180
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-765180 -n newest-cni-765180
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.33s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (94.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m34.5901361s)
--- PASS: TestNetworkPlugins/group/auto/Start (94.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dxglc" [311e6151-bd34-45ba-9f54-83f2a40786f0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004267523s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-dxglc" [311e6151-bd34-45ba-9f54-83f2a40786f0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00578591s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-663983 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-663983 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-663983 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983: exit status 2 (357.609878ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983: exit status 2 (383.059382ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-663983 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-663983 -n default-k8s-diff-port-663983
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.68s)
E1001 20:54:10.343282  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/default-k8s-diff-port-663983/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (52.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1001 20:48:50.491399  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (52.389213548s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (52.39s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-z6ttv" [29951283-300b-4dc4-8a4c-5bee9d7bb412] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003556863s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-272394 "pgrep -a kubelet"
I1001 20:49:37.437453  741264 config.go:182] Loaded profile config "kindnet-272394": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-272394 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-czrld" [1eca69f0-504a-47b4-a6c2-a9035c327293] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-czrld" [1eca69f0-504a-47b4-a6c2-a9035c327293] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004609605s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-272394 "pgrep -a kubelet"
I1001 20:49:47.501609  741264 config.go:182] Loaded profile config "auto-272394": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-272394 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kc2cs" [4f1e569f-99b7-40dc-b2d4-b89ee17841c4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-kc2cs" [4f1e569f-99b7-40dc-b2d4-b89ee17841c4] Running
E1001 20:49:51.634251  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/old-k8s-version-992970/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:49:51.838623  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003886993s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.31s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-272394 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-272394 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)
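
Each TestNetworkPlugins group runs the same three connectivity probes against its netcat deployment once the pod is Running; for the auto profile the probes shown above reduce to:

    kubectl --context auto-272394 exec deployment/netcat -- nslookup kubernetes.default                     # DNS: the cluster service name resolves
    kubectl --context auto-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"     # Localhost: the port is reachable inside the pod
    kubectl --context auto-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"        # HairPin: the pod connects back to its own service name

The same trio is repeated for each of the other plugin profiles in this report.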

                                                
                                    
TestNetworkPlugins/group/calico/Start (74.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m14.337517648s)
--- PASS: TestNetworkPlugins/group/calico/Start (74.34s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1001 20:51:06.627626  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/no-preload-381888/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m0.429310453s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-272394 "pgrep -a kubelet"
I1001 20:51:21.558947  741264 config.go:182] Loaded profile config "custom-flannel-272394": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-272394 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tgbd6" [00cac04a-faaa-41f9-b20a-f0ded90b29b9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tgbd6" [00cac04a-faaa-41f9-b20a-f0ded90b29b9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003681404s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6wmh5" [15abaf77-144b-4c56-abd5-9eb0e12074d0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004123287s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-272394 "pgrep -a kubelet"
I1001 20:51:31.337222  741264 config.go:182] Loaded profile config "calico-272394": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-272394 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dphbn" [99f48935-ecb2-45a8-8cc1-e16001bd9270] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dphbn" [99f48935-ecb2-45a8-8cc1-e16001bd9270] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004167837s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-272394 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-272394 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (83.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m23.007996522s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (83.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (58.569989364s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.57s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-wj5mf" [93a29fc0-fa28-4598-9424-4d633626c364] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004377211s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-272394 "pgrep -a kubelet"
I1001 20:53:11.987429  741264 config.go:182] Loaded profile config "flannel-272394": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-272394 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z7nhv" [6597fcdc-c37b-4c3f-95bd-cc50ab82a945] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z7nhv" [6597fcdc-c37b-4c3f-95bd-cc50ab82a945] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003865435s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-272394 "pgrep -a kubelet"
I1001 20:53:18.247371  741264 config.go:182] Loaded profile config "enable-default-cni-272394": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-272394 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-j6zld" [4db0e4f5-079d-43a5-baad-b12dcac1279c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-j6zld" [4db0e4f5-079d-43a5-baad-b12dcac1279c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003830138s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-272394 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-272394 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (41.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (41.596789936s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.60s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-272394 "pgrep -a kubelet"
I1001 20:54:26.833310  741264 config.go:182] Loaded profile config "bridge-272394": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-272394 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6bzv6" [353629c2-1a76-46cf-a405-b2f37f4bb74b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6bzv6" [353629c2-1a76-46cf-a405-b2f37f4bb74b] Running
E1001 20:54:31.143119  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:31.149568  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:31.161009  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:31.182450  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:31.223805  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:31.305289  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:31.466867  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:31.788768  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:32.430089  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:33.711539  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:34.906841  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/functional-988381/client.crt: no such file or directory" logger="UnhandledError"
E1001 20:54:36.273570  741264 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/kindnet-272394/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003923911s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-272394 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-272394 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)
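
For reference, the only meaningful difference between the plugin groups above is the CNI selection passed to "minikube start"; the variants exercised in this run are listed below, with "..." standing for the common flags shown in full on the first line (copied from the Start commands above):

    out/minikube-linux-arm64 start -p auto-272394 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p kindnet-272394 ...            --cni=kindnet
    out/minikube-linux-arm64 start -p calico-272394 ...             --cni=calico
    out/minikube-linux-arm64 start -p custom-flannel-272394 ...     --cni=testdata/kube-flannel.yaml
    out/minikube-linux-arm64 start -p enable-default-cni-272394 ... --enable-default-cni=true
    out/minikube-linux-arm64 start -p flannel-272394 ...            --cni=flannel
    out/minikube-linux-arm64 start -p bridge-272394 ...             --cni=bridge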

                                                
                                    

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.52s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-084095 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-084095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-084095
--- SKIP: TestDownloadOnlyKic (0.52s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-973905" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-973905
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (3.4s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-272394 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-272394" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 20:30:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-023668
contexts:
- context:
    cluster: pause-023668
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 20:30:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-023668
  name: pause-023668
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-023668
  user:
    client-certificate: /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/pause-023668/client.crt
    client-key: /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/pause-023668/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-272394

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-272394"

                                                
                                                
----------------------- debugLogs end: kubenet-272394 [took: 3.241503748s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-272394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-272394
--- SKIP: TestNetworkPlugins/group/kubenet (3.40s)

TestNetworkPlugins/group/cilium (5.03s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-272394 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-272394" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19736-735883/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 20:30:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-023668
contexts:
- context:
    cluster: pause-023668
    extensions:
    - extension:
        last-update: Tue, 01 Oct 2024 20:30:11 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-023668
  name: pause-023668
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-023668
  user:
    client-certificate: /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/pause-023668/client.crt
    client-key: /home/jenkins/minikube-integration/19736-735883/.minikube/profiles/pause-023668/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-272394

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-272394" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-272394"

                                                
                                                
----------------------- debugLogs end: cilium-272394 [took: 4.881522005s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-272394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-272394
--- SKIP: TestNetworkPlugins/group/cilium (5.03s)